• Title/Summary/Keyword: Auditory Image

66 search results

An Aesthetic Study of Film Sound Inherent in Hitchcock's Psycho (히치콕 <사이코>에 내재된 영화 사운드의 미학적 고찰)

  • Park, Byung-Kyu
    • The Journal of the Korea Contents Association
    • /
    • v.14 no.6
    • /
    • pp.26-33
    • /
    • 2014
  • From a film-aesthetic point of view, this paper examines the signification of all three sound elements in Hitchcock's Psycho: speech, noise, and music. Speech renders a mental image audible through voice-over, and at times embodies the indiscernibility of life and death. The paper demonstrates that noise, beyond the visual techniques pointed out by Metz, can also mark punctuation, the boundary between narrative segments, citing the sound of falling water that completes the shower scene by offsetting the scream in the audience's mind. In the music, desire and oppression are symbolized and set in dissonance; on occasion, the coexistence of two chords represents the duplicity of Norman-mother. The music may also vanish into silence, mummified in suspended time. The common filmic signification of the sounds in Psycho can thus be called a reconceptualization of the image.

A study on the Cochlear View in Multichannel Cochlear Implantees (인공와우 이식술 환자의 Cochlear View 촬영에 관한 연구)

  • Kweon, Dae-Cheol;Kim, Jeong-Hee;Kim, Seong-Lyong;Kim, Hae-Seong;Lee, Yong-Woo
    • Journal of radiological science and technology
    • /
    • v.22 no.2
    • /
    • pp.27-32
    • /
    • 1999
  • A cochlear implant poses a contraindication to magnetic resonance imaging (MRI), because MRI generates artifacts, induces an electrical current, and can magnetize the device. CT is relatively expensive, and the metal electrodes scatter the image. In post-implantation radiographic studies using anterior-posterior transorbital, submental-vertex, and lateral views, the intracochlear electrodes are not well displayed. The authors therefore developed a special view, which we call the cochlear view. The patient sits in front of a vertical device, and the midsagittal plane is adjusted to form an angle of 15°, 30°, or 45° with the film. The flexion of the neck is adjusted so that the infraorbitomeatal line (IOML) is parallel to the transverse axis of the film. The central ray is directed to exit the skull at a point 3.0 cm anterior and 2.0 cm superior to the external auditory meatus (EAM). The results show that a single radiograph of the cochlear view provides sufficient information to demonstrate the position of the electrode array and the depth of insertion in the cochlea, and that the 45° cochlear view yields an excellent image. The cochlear view gives the greatest amount of medical information with the least radiation and the lowest cost, and can be widely used in all cochlear implant clinics.
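
The positioning rule above (central ray exiting 3.0 cm anterior and 2.0 cm superior to the EAM) can be sketched as a tiny coordinate computation. Only the two offsets come from the abstract; the 2D (anterior, superior) plane and the EAM coordinates are illustrative assumptions.

```python
# Hypothetical sketch of the central-ray exit point described above.
# Coordinates are in cm in an assumed (anterior, superior) plane relative
# to the skull; only the 3.0 cm / 2.0 cm offsets come from the abstract.

def central_ray_exit(eam, anterior_offset=3.0, superior_offset=2.0):
    """Return the exit point given the EAM position (anterior, superior)."""
    a, s = eam
    return (a + anterior_offset, s + superior_offset)

print(central_ray_exit((0.0, 0.0)))  # (3.0, 2.0) relative to the EAM
```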


Emotion Classification Method Using Various Ocular Features (다양한 눈의 특징 분석을 통한 감성 분류 방법)

  • Kim, Yoonkyoung;Won, Myoung Ju;Lee, Eui Chul
    • The Journal of the Korea Contents Association
    • /
    • v.14 no.10
    • /
    • pp.463-471
    • /
    • 2014
  • In this paper, emotion classification was performed using four ocular features extracted from near-infrared camera images. Compared with previous work, the proposed method uses more ocular features, and each feature was validated as significant for emotion classification. To minimize side effects on the ocular features caused by visual stimuli, auditory stimuli inducing two opposite emotion pairs, "positive-negative" and "arousal-relaxation", were used. Four features, pupil size, pupil accommodation rate, blink frequency, and eye-closed duration, were adopted for classification; all could be extracted automatically by lab-made image processing software. As a result, pupil accommodation rate and blink frequency were statistically significant features for classifying arousal-relaxation, and eye-closed duration was the most significant feature for classifying positive-negative.
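
The significance testing described above can be illustrated with a minimal sketch: a Welch's t statistic comparing one ocular feature (blink frequency) between two induced states. The sample values below are invented for illustration; the paper's actual data and test procedure are not reproduced here.

```python
from statistics import mean, variance

# Hypothetical blink-frequency samples (blinks/min) for two induced states;
# the values are illustrative, not the paper's data.
arousal = [18.0, 20.5, 17.2, 21.1, 19.4]
relaxation = [12.3, 11.8, 14.0, 13.1, 12.6]

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    va, vb = variance(a), variance(b)  # sample variances
    return (mean(a) - mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

t = welch_t(arousal, relaxation)
# A large |t| suggests the feature separates the two states, mirroring the
# finding that blink frequency is significant for arousal-relaxation.
print(round(t, 2))
```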

Human-Computer Interaction Based Only on Auditory and Visual Information

  • Sha, Hui;Agah, Arvin
    • Transactions on Control, Automation and Systems Engineering
    • /
    • v.2 no.4
    • /
    • pp.285-297
    • /
    • 2000
  • One of the research objectives in the area of multimedia human-computer interaction is the application of artificial intelligence and robotics technologies to the development of computer interfaces. This involves utilizing many forms of media, integrating speech input, natural language, graphics, hand-pointing gestures, and other methods for interactive dialogue. Although current human-computer communication methods include keyboards, mice, and other traditional devices, the two basic ways by which people communicate with each other are voice and gesture. This paper reports on research focusing on the development of an intelligent multimedia interface system modeled on the manner in which people communicate. The work explores interaction between humans and computers based only on the processing of speech (words uttered by the person) and of images (hand-pointing gestures). The purpose of the interface is to control a pan/tilt camera, pointing it to a location specified by the user through spoken words and hand pointing. The system utilizes another, stationary camera to capture images of the user's hand and a microphone to capture the user's words. Upon processing the images and sounds, the system responds by pointing the camera. The interface first uses hand pointing to locate the general position the user is referring to, then uses the user's voice commands to fine-tune the location and, if requested, change the camera's zoom. The image of the location is captured by the pan/tilt camera and sent to a color TV monitor to be displayed. This type of system has applications in tele-conferencing and other remote operations, where the system must respond to the user's commands in a manner similar to how the user would communicate with another person. The advantage of this approach is the elimination of the traditional input devices the user must otherwise operate to control a pan/tilt camera, replacing them with more "natural" means of interaction. A number of experiments were performed to evaluate the interface system with respect to its accuracy, efficiency, reliability, and limitations.
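
The two-stage control loop described above (coarse localization by hand pointing, then refinement by voice) can be sketched as follows. The angle ranges, step size, and command vocabulary are assumptions for illustration, not the paper's actual parameters.

```python
# Minimal sketch (not the paper's implementation) of two-stage camera control:
# a hand-pointing estimate gives a coarse pan/tilt target, then recognized
# voice commands refine it. All numeric parameters are assumed values.

FINE_STEP = 2.0  # degrees per voice command; an assumed value

def coarse_target(hand_vec):
    """Map a pointing direction (x, y) in [-1, 1] to pan/tilt degrees."""
    x, y = hand_vec
    return (x * 90.0, y * 45.0)  # assumed ranges: +/-90 deg pan, +/-45 deg tilt

def refine(pan, tilt, zoom, command):
    """Apply one recognized voice command to the camera state."""
    if command == "left":
        pan -= FINE_STEP
    elif command == "right":
        pan += FINE_STEP
    elif command == "up":
        tilt += FINE_STEP
    elif command == "down":
        tilt -= FINE_STEP
    elif command == "zoom in":
        zoom *= 1.5
    return pan, tilt, zoom

pan, tilt = coarse_target((0.5, -0.2))  # hand points right and slightly down
zoom = 1.0
for cmd in ["left", "up", "zoom in"]:   # spoken refinements
    pan, tilt, zoom = refine(pan, tilt, zoom, cmd)
print(pan, tilt, zoom)  # 43.0 -7.0 1.5
```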


Analysis of Facial Movement According to Opposite Emotions (상반된 감성에 따른 안면 움직임 차이에 대한 분석)

  • Lee, Eui Chul;Kim, Yoon-Kyoung;Bea, Min-Kyoung;Kim, Han-Sol
    • The Journal of the Korea Contents Association
    • /
    • v.15 no.10
    • /
    • pp.1-9
    • /
    • 2015
  • In this paper, facial movements under opposite emotion stimuli are analyzed by image processing of Kinect facial images. To induce the two opposite emotion pairs "sad - excitement" and "contentment - angry", which lie at opposite positions on Russell's 2D emotion model, both visual and auditory stimuli were given to subjects. First, 31 main points were chosen from the 121 facial feature points of the active appearance model obtained from the Kinect Face Tracking SDK. Then, pixel changes around the 31 main points were analyzed, using a local minimum shift matching method to cope with non-linear facial movement. As a result, right-side and left-side facial movements occurred for the "sad" and "excitement" emotions, respectively, and left-side movement was comparatively more frequent for "contentment". In contrast, both left- and right-side movements occurred for "angry".
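
The matching step can be illustrated with a minimal block-matching sketch: for a patch around a feature point, candidate shifts in the next frame are searched for the minimum sum of absolute differences. This is a simplification of the paper's local minimum shift matching; the patches and shifts below are invented.

```python
# Hypothetical sketch of shift matching around a feature point: pick the
# candidate shift whose patch has minimum sum of absolute differences (SAD)
# from the previous frame's patch. Patch values are illustrative.

def sad(a, b):
    """Sum of absolute differences between two equal-length patches."""
    return sum(abs(x - y) for x, y in zip(a, b))

def best_shift(prev_patch, candidates):
    """candidates maps candidate (dx, dy) shifts to patches in the next frame."""
    return min(candidates, key=lambda s: sad(prev_patch, candidates[s]))

prev_patch = [10, 12, 11, 13]
candidates = {
    (0, 0): [30, 32, 31, 33],  # large intensity change: poor match
    (1, 0): [10, 12, 11, 14],  # near-identical: best match
    (0, 1): [20, 22, 21, 23],
}
print(best_shift(prev_patch, candidates))  # (1, 0)
```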

Diagnostic Value of Susceptibility-Weighted MRI in Differentiating Cerebellopontine Angle Schwannoma from Meningioma

  • Seo, Minkook;Choi, Yangsean;Lee, Song;Kim, Bum-soo;Jang, Jinhee;Shin, Na-Young;Jung, So-Lyung;Ahn, Kook-Jin
    • Investigative Magnetic Resonance Imaging
    • /
    • v.24 no.1
    • /
    • pp.38-45
    • /
    • 2020
  • Background: Differentiating cerebellopontine angle (CPA) schwannoma from meningioma is often difficult. Purpose: To identify imaging features that distinguish CPA schwannoma from meningioma and to investigate the usefulness of susceptibility-weighted imaging (SWI) in differentiating them. Materials and Methods: Between March 2010 and January 2015, 11 pathologically confirmed meningiomas and 20 schwannomas involving the CPA with preoperative SWI were retrospectively reviewed. The following MRI features were evaluated: 1) maximal diameter on the axial image, 2) angle between the tumor border and the adjacent petrous bone, 3) presence of intratumoral dark signal intensity on SWI, 4) tumor consistency, 5) blood-fluid level, 6) involvement of the internal auditory canal (IAC), 7) dural tail, and 8) involvement of adjacent intracranial space. On CT, 1) dilatation of the IAC, 2) intratumoral calcification, and 3) adjacent hyperostosis were evaluated. All features were compared using Chi-squared and Fisher's exact tests, and univariate and multivariate logistic regression analyses were performed to identify imaging features that differentiate the two tumors. Results: Schwannomas more frequently demonstrated dark spots on SWI (P = 0.025), cystic consistency (P = 0.034), and a globular angle (P = 0.008), and showed more dilatation of the internal auditory meatus and lack of calcification (P = 0.008 and P = 0.02, respectively). Dural tail was more common in meningiomas (P < 0.007). Dark spots on SWI and dural tail remained significant in multivariate analysis (P = 0.037 and P = 0.012, respectively). The combination of the two features showed a sensitivity of 80% and a specificity of 100%, with an area under the receiver operating characteristic curve of 0.9.
Conclusion: Dark spots on SWI are helpful in differentiating CPA schwannoma from meningioma, and combining the dural tail sign with dark spots on SWI yields strong diagnostic value.
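
The sensitivity and specificity arithmetic behind such a combined rule can be sketched in a few lines. The labels below are made up solely to illustrate the computation; they are chosen to yield the same 80%/100% values reported above, not taken from the study's data.

```python
# Illustrative computation of sensitivity and specificity for a binary
# diagnostic rule (e.g., "dark spots on SWI present and dural tail absent"
# predicting schwannoma). The labels are invented, not the study's data.

def sens_spec(predictions, truths):
    """truths: True = schwannoma. Returns (sensitivity, specificity)."""
    tp = sum(p and t for p, t in zip(predictions, truths))
    tn = sum(not p and not t for p, t in zip(predictions, truths))
    fn = sum(not p and t for p, t in zip(predictions, truths))
    fp = sum(p and not t for p, t in zip(predictions, truths))
    return tp / (tp + fn), tn / (tn + fp)

truths = [True] * 10 + [False] * 5                 # 10 schwannomas, 5 meningiomas
preds  = [True] * 8 + [False] * 2 + [False] * 5    # rule flags 8/10 schwannomas
sens, spec = sens_spec(preds, truths)
print(sens, spec)  # 0.8 1.0
```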

Principles of Intraoperative Neurophysiological Monitoring with Insertion and Removal of Electrodes (수술 중 신경계감시검사에서 검사에 따른 전극의 삽입 및 제거방법)

  • Lim, Sung Hyuk;Park, Soon Bu;Moon, Dae Young;Kim, Jong Sik;Choi, Young Doo;Park, Sang Ku
    • Korean Journal of Clinical Laboratory Science
    • /
    • v.51 no.4
    • /
    • pp.453-461
    • /
    • 2019
  • Intraoperative neurophysiological monitoring (INM) identifies damage to the nervous system as it occurs during surgery. The method is applied in various operations to validate the procedure being performed and allow it to proceed with confidence. The examination is conducted in the operating room, using subdermal needle electrodes to optimize recording. There are no textbooks or guides to the correct stimulation and recording sites for these intraoperative tests. This article provides a detailed description of the correct stimulation and recording sites for motor evoked potentials (MEP), somatosensory evoked potentials (SSEP), brainstem auditory evoked potentials (BAEP), and visual evoked potentials (VEP). Free-running electromyography (EMG) observes the EMG arising in the muscles, from which the functional state of most cranial nerves and spinal nerve roots can be determined. To aid understanding, an image depicting the insertion of subdermal needle electrodes into each of the muscles is attached. Furthermore, considering both the patient and the examiner, a safe method is suggested for removing the electrodes after the test concludes.

A Study on the Costume and Its Inner Symbolic Meaning Expressed in Stanley Kubrick's Film Lolita (1962) (스탠리 큐브릭의 영화 <로리타(1962)>에 나타난 의상의 상징성에 관한 연구)

  • Kim, Hye-Jeong;Lee, Sang-Rye
    • Journal of Fashion Business
    • /
    • v.13 no.1
    • /
    • pp.152-166
    • /
    • 2009
  • By virtue of the development of mass media, cinema, a composite space art combining visual and auditory elements, exhibits actual life and thus maintains close relationships with social, cultural, and economic fields, continually generating fashion codes as well as reflecting the image of its times. Fashion style in movies, in particular, delivers their image and atmosphere and becomes a means of conveying the personality, spiritual world, and inner thinking of the characters, helping to drive the plot. This study therefore aimed to show that fashion fuses with and is shared across a diversity of genres such as film, becomes a cultural model that creates new culture in relation to daily life, and induces and presents trends in contemporary fashion. For this purpose, the study analyzed fashion style in the movie. Lolita is a novel published in 1954 by the Russian-American writer Vladimir Nabokov (1899-1977). It portrays the unethical love between Humbert, a middle-aged man, and Lolita, a teenage girl. It was first adapted for film by Stanley Kubrick in 1962 and remade by Adrian Lyne in 1997. In both versions the character of Lolita has a youthful, girlish look and appears immature, and in both she presents an incomplete female image with a sexually freewheeling way of thinking. The study thus sought to prove that the fashion style created for the character not only provides clues to the time and space background of the film but also helps the film develop effectively by portraying the character. It further aimed to show that this style becomes both a foundation for leading trends in contemporary fashion and a code of mass culture: Lolita's fashion style in the movie is reflected diversely in mass culture as well as in contemporary fashion.

Real-Time Stereoscopic Visualization of Very Large Volume Data on CAVE (CAVE상에서의 방대한 볼륨 데이타의 실시간 입체 영상 가시화)

  • 임무진;이중연;조민수;이상산;임인성
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.8 no.6
    • /
    • pp.679-691
    • /
    • 2002
  • Volume visualization is an important subarea of scientific visualization, concerned with techniques for generating meaningful visual information from abstract and complex volume datasets defined in three- or higher-dimensional space. It has become increasingly important in fields including meteorology, medical science, and computational fluid dynamics. Virtual reality, on the other hand, is a research field focusing on techniques that help users experience virtual worlds through visual, auditory, and tactile senses. In this paper, we present a visualization system for CAVE, an immersive 3D virtual environment system, which generates stereoscopic images from huge human volume datasets in real time using an improved volume visualization technique. To complement 3D texture-mapping based volume rendering, which slows down easily as data sizes increase, our system employs an image-based rendering technique to guarantee real-time performance. The system offers a variety of user interface functions for effective visualization. We describe the real-time stereoscopic visualization system in detail and show how the Visible Korean Human dataset is effectively visualized on CAVE.
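
The slice-based rendering that the system complements rests on back-to-front alpha compositing along each viewing ray, which can be sketched as follows. The sample colors and opacities are illustrative, and a single scalar stands in for an RGB color.

```python
# Sketch of back-to-front alpha compositing, the core of slice-based volume
# rendering: samples along a viewing ray are blended into a final color.
# A single scalar stands in for RGB; the sample values are illustrative.

def composite(samples):
    """samples: list of (color, alpha) pairs from back to front."""
    color = 0.0
    for c, a in samples:
        color = c * a + color * (1.0 - a)  # "over" operator, back to front
    return color

# Three slices along a ray, back to front; blends to roughly 0.495.
print(composite([(0.2, 0.5), (0.6, 0.25), (0.9, 0.4)]))
```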

DECODE: A Novel Method of DEep CNN-based Object DEtection using Chirps Emission and Echo Signals in Indoor Environment (실내 환경에서 Chirp Emission과 Echo Signal을 이용한 심층신경망 기반 객체 감지 기법)

  • Nam, Hyunsoo;Jeong, Jongpil
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.21 no.3
    • /
    • pp.59-66
    • /
    • 2021
  • Humans recognize surrounding objects mainly through visual and auditory information among the five senses (sight, hearing, smell, touch, taste), yet most recent object recognition research focuses on analysis of image sensor data. In this paper, various chirp audio signals were emitted into the observation space, the echoes were collected through a 2-channel receiving sensor and converted into spectral images, and an object recognition experiment in 3D space was conducted using a deep-learning image classification algorithm. The experiment was carried out under the noise and reverberation of a typical indoor environment, not the ideal conditions of an anechoic chamber, and recognition through echoes estimated object positions with 83% accuracy. In addition, by mapping the inference results onto the observation space as a 3D sound spatial signal and outputting them as sound, it was possible to convey visual information through audio. This suggests that object recognition research should use various echo information alongside image information, and that the technique can support augmented reality through 3D sound.
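
The emission step can be sketched as a linear chirp generator; in the pipeline above, the recorded echoes of such sweeps are then converted to spectral images for the CNN. The frequency range, duration, and sample rate below are illustrative assumptions, not the paper's actual signal parameters.

```python
import math

# Hypothetical sketch of the emission step: a linear chirp sweeping f0 -> f1
# over a given duration, sampled at rate sr. All parameters are illustrative.

def linear_chirp(f0, f1, duration, sr):
    """Return samples of a linear frequency sweep from f0 to f1 Hz."""
    n = int(duration * sr)
    k = (f1 - f0) / duration  # sweep rate in Hz per second
    # Instantaneous phase of a linear chirp: 2*pi*(f0*t + k*t^2/2)
    return [math.sin(2 * math.pi * (f0 * t + 0.5 * k * t * t))
            for t in (i / sr for i in range(n))]

sig = linear_chirp(1000, 8000, 0.01, 44100)
print(len(sig))  # 441 samples for a 10 ms chirp at 44.1 kHz
```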