• Title/Abstract/Keyword: digital sound contents


Research Analysis on User's Acceptability of Digital Contents Distribution among Individuals (개인 간 저작물 유통을 위한 사용자의 수용성 조사 분석)

  • Sohn, Bang Yong; Suh, Hye Sun
    • Journal of Digital Convergence / v.14 no.1 / pp.211-217 / 2016
  • Paid-use systems for contents such as sound sources and webtoons, in which licenses are systematically managed, have gradually been established. However, free sites that rely mostly on advertising revenue remain rampant, creating difficulties for many contents developers and obstructing the protection of their reasonable rights. In this situation, systematic measures are needed to protect authors' copyright while maximizing users' use of contents. It is therefore important to address the convenience of digital contents distribution and the diversity of contents licenses (differentiating permission rates according to the user's purpose, scope, service period, etc.) based on the needs of contents users. This paper offers a guideline for establishing a contents distribution platform among individuals and for understanding users' needs and acceptability in order to activate digital contents transactions between individuals.

Influences of a Sound Design of Media Contents on Communication Effects - TV-CF Sound Using a BQ-TEST (영상음향의 사운드디자인설계가 커뮤니케이션 효과에 미치는 영향 - TV광고음향을 뇌 지수 분석기법으로 -)

  • Yoo, Whoi-Jong; Suh, Hyun-Ju; Moon, Nam-Mee
    • Journal of Broadcast Engineering / v.13 no.5 / pp.602-611 / 2008
  • The sound design performed in the production of media contents such as TV programs, movies, and commercials has largely relied on the experienced intuition of a few experts with respect to the auditory effects that communicate a story, and there have been few quantitative studies verifying the visual and auditory effects actually felt by users. This study uses a non-equivalent control group pretest-posttest design to investigate how differences in sound design in the production of media contents affect communication effects on users. The brain quotient (BQ) obtained from brain-wave measurements was analyzed while subjects watched an experimental video (track A) designed using a 60-second TV commercial only and an experimental video (track B) designed with sound effects and music, in order to determine which sound design produces different communication effects for users. The results can be summarized as follows. First, comparing the attention quotient (ATQ), the BQ indicating recognition effects, between the two tracks, track A showed higher activation than track B; this can be interpreted as the music-based sound design yielding higher attention and concentration than the sound-effect-based design. Second, comparing the emotional quotient (EQ), which indicates emotional effects, track A again showed a higher difference than track B, meaning that the music-based design contributed more to emotional effects than the sound-effect-based design. Third, comparing the left and right brain equivalence quotient (ACQ), which indicates memory activation effects, there were no significant differences. Although the experiment was constrained to TV commercials, the results suggest that, contrary to the conventional view that sound-effect-based design drives strong concentration while music-based design drives emotional response, music-based design may produce greater effects in sustained concentration, and it clearly showed higher effects in the emotional aspect. However, further study with a larger number of subjects is needed to clarify the small differences in ACQ. This study is useful for investigating the communication effects of sound design in media contents in a quantitative manner through brain-wave measurement, and its results are expected to serve as basic material in the field of sound production.
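
For readers interested in this kind of quantitative comparison, the sketch below is a hypothetical illustration, not the authors' analysis: it compares per-subject brain-quotient scores between two sound-design conditions with a paired t-test; all numbers and variable names are invented.

```python
# Hypothetical sketch of a track-A vs. track-B brain-quotient comparison.
# The scores below are invented for illustration; the paper's actual data
# and analysis method are not reproduced here.
import numpy as np
from scipy import stats

# Per-subject attention quotient (ATQ) measured while watching each track.
atq_track_a = np.array([72.1, 68.4, 75.0, 70.3, 69.8, 73.5])  # music-based design (assumed)
atq_track_b = np.array([66.2, 65.9, 70.1, 67.4, 64.8, 68.0])  # sound-effect-based design (assumed)

# Paired comparison, since the same subjects watched both tracks.
t_stat, p_value = stats.ttest_rel(atq_track_a, atq_track_b)
print(f"mean ATQ A={atq_track_a.mean():.1f}, B={atq_track_b.mean():.1f}, "
      f"t={t_stat:.2f}, p={p_value:.3f}")
```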

Implementation of ARM based Embedded System for Muscular Sense into both Color and Sound Conversion (근감각-색·음 변환을 위한 ARM 기반 임베디드시스템의 구현)

  • Kim, Sung-Ill
    • The Journal of the Korea Contents Association / v.16 no.8 / pp.427-434 / 2016
  • This paper focuses on real-time hardware processing by implementing an ARM Cortex-M4 based embedded system running a conversion algorithm from the muscular sense, which recognizes rotations of the human body, directional changes, and amounts of motion, to both visual and auditory elements. As the input method for the muscular sense, an AHRS (Attitude and Heading Reference System) was used to acquire roll, pitch, and yaw values in real time. These three input values were converted into the three elements of the HSI color model, namely intensity, hue, and saturation, respectively, and the final color signals were obtained by converting HSI into the RGB color model. In addition, the three muscular-sense inputs were converted into three elements of sound, octave, scale, and velocity, which were synthesized into an output sound using MIDI (Musical Instrument Digital Interface). Analysis of the output color and sound signals showed that the muscular-sense input signals were correctly converted into both color and sound in real time by the proposed conversion method.
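
As a rough illustration of this kind of mapping (the paper's exact axis assignments and scaling are not given here, so everything below is an assumption), the sketch converts roll/pitch/yaw angles into an RGB color and MIDI note parameters; colorsys's HSV conversion stands in for the HSI model used in the paper.

```python
# Hypothetical sketch of the kind of mapping described above. The exact axis-to-
# parameter assignments and scaling in the paper are not specified here; these
# linear mappings are assumptions for illustration only.
import colorsys

def ahrs_to_color_and_midi(roll, pitch, yaw):
    """Map AHRS angles (degrees) to an RGB color and MIDI note parameters."""
    # Color: yaw -> hue, pitch -> saturation, roll -> intensity (assumed mapping).
    hue = (yaw % 360.0) / 360.0
    sat = min(abs(pitch) / 90.0, 1.0)
    val = (roll + 180.0) / 360.0
    # colorsys provides HSV, used here as a stand-in for the paper's HSI model.
    r, g, b = colorsys.hsv_to_rgb(hue, sat, val)
    rgb = tuple(int(c * 255) for c in (r, g, b))

    # Sound: roll -> octave, pitch -> scale degree, yaw -> velocity (assumed mapping).
    octave = 3 + int((roll + 180.0) / 360.0 * 4)          # octaves 3..6
    scale_degree = int((pitch + 90.0) / 180.0 * 6)        # degrees 0..6 of a major scale
    major_scale = [0, 2, 4, 5, 7, 9, 11]
    midi_note = 12 * (octave + 1) + major_scale[min(scale_degree, 6)]
    velocity = int((yaw % 360.0) / 360.0 * 126) + 1       # MIDI velocity 1..127
    return rgb, (midi_note, velocity)

print(ahrs_to_color_and_midi(roll=30.0, pitch=-15.0, yaw=120.0))
```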

Multimedia Contents System based on Repurposing and Transcoding (리퍼포징과 트랜스코딩에 기반한 멀티미디어 콘텐츠 시스템)

  • Lee, Hyun-Lee; Kim, Hye-Suk; Kim, Kyoung-Soo; Ceong, Hee-Taek
    • Journal of Digital Contents Society / v.11 no.2 / pp.145-152 / 2010
  • Many studies have applied multimedia contents to education, but those attempts are difficult to apply directly because each social studies curriculum differs slightly, which is a peculiarity of the subject. We therefore seek the most effective learning method by building a multimedia contents system combining computer graphic design, animation, sound, and video. This study introduces a method of real-time image transcoding based on a repurposing structure to make better use of such a multimedia contents system. We expect that the developed system can help increase learning efficiency through graphics, sound, and video, as well as encourage students' motivation and interest. It is well suited for creative, self-directed, and social studies material used to study a subject in an interactive and repetitive way, and we hope this study will serve as a useful guideline for developing various multimedia contents.
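
A minimal sketch of the transcoding idea, not the system described in the paper: re-encoding a source image to a target resolution and format on demand, with Pillow assumed available and the file names hypothetical.

```python
# Illustrative sketch only: the paper's repurposing/transcoding pipeline is not
# reproduced here. This shows the general idea of transcoding a source image to
# a device-specific resolution and format on demand (Pillow is assumed available).
from PIL import Image

def transcode_image(src_path, dst_path, target_size=(640, 360), target_format="JPEG"):
    """Resize and re-encode an image for a target device profile."""
    with Image.open(src_path) as img:
        img = img.convert("RGB")          # normalize mode for JPEG output
        img.thumbnail(target_size)        # scale down, preserving aspect ratio
        img.save(dst_path, format=target_format, quality=85)

# Hypothetical usage: repurpose a lecture slide for a mobile layout.
transcode_image("lecture_slide.png", "lecture_slide_mobile.jpg")
```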

A Study on Sound Reproduction for Adaptive Mixed-Reality Space (적응형 혼합현실 체험공간을 위한 음향재현 기술에 관한 연구)

  • Park, Ji-Woong; Lee, Ho-Jin; Kwon, Soonil
    • Proceedings of the Korea Information Processing Society Conference / 2013.05a / pp.303-306 / 2013
  • Adaptive mixed-reality technology based on interactive architecture, which fuses a real physical space with a virtual-reality space to maximize the sense of real space, has recently been studied. In such a mixed-reality space, we studied an audio sweet-spot optimization technique for increasing physical-spatial immersion according to the user's dynamic position. Exploiting the physical attenuation of sound in each frequency band, we applied per-band pre-compensation to the audio signal and tested whether an audio sweet spot with the same timbre as the original sound can be formed at the user's dynamic position. The results confirmed that correcting the differences in attenuation across frequency bands allows the original timbre to be reproduced.
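
A minimal sketch of the compensation idea, assuming hypothetical per-band attenuation rates rather than the paper's measured values: boost each frequency band in proportion to how much more it is attenuated over the listener's distance, so the timbre at the sweet spot stays close to the original.

```python
# A minimal sketch, not the paper's method: given assumed per-band attenuation
# rates (dB per meter, higher bands attenuating faster), compute compensating
# gains so that a listener at a given distance hears an original-like timbre.
import numpy as np

band_centers_hz = np.array([125, 250, 500, 1000, 2000, 4000, 8000])
# Assumed excess attenuation per meter for each band (illustrative values only).
atten_db_per_m = np.array([0.00, 0.01, 0.02, 0.05, 0.10, 0.20, 0.40])

def compensation_gains_db(listener_distance_m, reference_distance_m=1.0):
    """Per-band boost (dB) that cancels the extra high-frequency loss over distance."""
    extra_path = max(listener_distance_m - reference_distance_m, 0.0)
    return atten_db_per_m * extra_path

for f, g in zip(band_centers_hz, compensation_gains_db(listener_distance_m=6.0)):
    print(f"{f:5d} Hz: +{g:.2f} dB")
```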

System Integration for the Operation of Unmanned Audio Center based on AoIP

  • Lee, Jaeho; Hamacher, Alaric; Kwon, Soonchul; Lee, Seunghyun
    • International journal of advanced smart convergence / v.6 no.2 / pp.1-8 / 2017
  • The development of the information and communication industry has recently brought many changes to the professional audio industry, particularly in sound-system architecture and equipment. Analog equipment is giving way to digital equipment, and integrated control equipment makes it easier to operate and manage a sound system. However, the integrated control systems currently on the market can control only some devices. In this paper, we propose a new AoIP-based system configuration method that enables operation-status monitoring, unmanned operation, and self-diagnosis of equipment. The study confirms that the proposed system can be operated, monitored, and self-diagnosed from remote sites, and we expect AoIP-based sound systems to become the industry standard in the future.
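
As a hedged illustration of unattended status monitoring (the paper's AoIP control protocol and device inventory are not specified here, so the hosts, port, and check method below are assumptions), a monitoring loop might periodically verify that each networked audio device is reachable:

```python
# Illustrative sketch only: the paper's AoIP control protocol is not described here.
# This shows the general shape of an unmanned monitoring loop that periodically
# checks whether networked audio devices are reachable on an assumed control port.
import socket
import time

DEVICES = {            # hypothetical device inventory: name -> (host, control port)
    "amp-rack-1": ("192.168.10.21", 4440),
    "dsp-core":   ("192.168.10.30", 4440),
}

def check_device(host, port, timeout_s=2.0):
    """Return True if a TCP connection to the device's control port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout_s):
            return True
    except OSError:
        return False

def monitor_loop(interval_s=30):
    while True:
        for name, (host, port) in DEVICES.items():
            status = "OK" if check_device(host, port) else "UNREACHABLE"
            print(f"{time.strftime('%H:%M:%S')} {name}: {status}")
        time.sleep(interval_s)

# monitor_loop()  # runs indefinitely at a remote, unmanned site
```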

A Program for Korean Animation Sound Libraries (국내용 애니메이션 사운드 라이브러리 구축 방안)

  • Rhim, Young-Kyu
    • Cartoon and Animation Studies / s.15 / pp.221-235 / 2009
  • Most of the sounds used in animated films are artificially made: a large portion are either recordings of actual sounds or variously processed artificial sounds created with professional equipment such as synthesizers. A single animation episode contains an enormous number of sounds, resulting in significant sound-production costs. These sounds have great potential for reuse in other films or animations, but in practice this rarely happens. This thesis discusses how such sound sources can be recognized as new added value in the current market as usable digital content. Apple's iTunes Music Store is widely regarded as the most successful digital content distribution model to date, and its library model has potential for application to the Korean sound industry: such a system lets sound creators connect directly to the online store as first-party content suppliers, while users obtain the content they need easily and at a low price. The most important part of building this system is the search engine, which must let users find data quickly and must be designed to take the characteristics of the Korean language into account. This thesis proposes incorporating a wiki system so that users can search, build, and share their own databases. On this basis, a Korean animation sound library can drive development and growth in the sound-source industry as a new form of digital sound content.
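
As a purely illustrative sketch, not the thesis's design: one simple way to make keyword search usable for Korean sound descriptions without a full morphological analyzer is character-bigram indexing; the entries and identifiers below are hypothetical.

```python
# A minimal sketch, not the thesis's search engine: character-bigram indexing is
# one common way to make keyword search work for Korean text without a
# morphological analyzer. The sample entries below are hypothetical.
from collections import defaultdict

def bigrams(text):
    text = text.replace(" ", "")
    return {text[i:i + 2] for i in range(len(text) - 1)} or {text}

class SoundLibraryIndex:
    def __init__(self):
        self.index = defaultdict(set)   # bigram -> set of sound IDs
        self.meta = {}

    def add(self, sound_id, description):
        self.meta[sound_id] = description
        for bg in bigrams(description):
            self.index[bg].add(sound_id)

    def search(self, query):
        candidate_sets = [self.index[bg] for bg in bigrams(query)]
        hits = set.intersection(*candidate_sets) if candidate_sets else set()
        return [(sid, self.meta[sid]) for sid in hits]

lib = SoundLibraryIndex()
lib.add("fx_0001", "빗소리 효과음")      # hypothetical entries
lib.add("fx_0002", "천둥 소리 효과음")
print(lib.search("효과음"))
```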


Audio Event Detection Using Deep Neural Networks (깊은 신경망을 이용한 오디오 이벤트 검출)

  • Lim, Minkyu; Lee, Donghyun; Park, Hosung; Kim, Ji-Hwan
    • Journal of Digital Contents Society / v.18 no.1 / pp.183-190 / 2017
  • This paper proposes an audio event detection method using Deep Neural Networks (DNN). The proposed method applies a Feed-Forward Neural Network (FFNN) to generate output probabilities of twenty audio events for each frame. Mel-scale filter bank (FBANK) features are extracted from each frame, and five consecutive frames are combined into one vector, which is the input feature of the FFNN. The output layer of the FFNN produces audio event probabilities for each input feature vector. When the event probability exceeds a threshold for more than five consecutive frames, an audio event is detected, and a detected event is considered to continue as long as it is detected again within one second. The proposed method achieves 71.8% accuracy for 20 classes of the UrbanSound8K and BBC Sound FX datasets.
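
A hedged sketch of the decision logic described in the abstract, not the authors' implementation: it stacks five consecutive FBANK frames into one input vector and then marks an event wherever a per-frame probability (here synthetic, standing in for the FFNN output) exceeds a threshold for at least five consecutive frames.

```python
# A hedged sketch of the frame-level decision logic described above, not the
# authors' implementation. It assumes per-frame event probabilities have already
# been produced by an FFNN over stacked FBANK features; values here are synthetic.
import numpy as np

def stack_context(fbank, context=5):
    """Concatenate each frame with its neighbors (context frames total) as FFNN input."""
    pad = context // 2
    padded = np.pad(fbank, ((pad, pad), (0, 0)), mode="edge")
    return np.stack([padded[i:i + context].reshape(-1) for i in range(len(fbank))])

def detect_events(frame_probs, threshold=0.5, min_frames=5):
    """Return (start, end) frame indices of runs where prob > threshold for >= min_frames."""
    active = frame_probs > threshold
    events, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i
        elif not a and start is not None:
            if i - start >= min_frames:
                events.append((start, i))
            start = None
    if start is not None and len(active) - start >= min_frames:
        events.append((start, len(active)))
    return events

fbank = np.random.rand(100, 40)                          # 100 frames of 40-dim FBANK (synthetic)
inputs = stack_context(fbank)                            # shape (100, 200), fed to the FFNN
probs = np.clip(np.sin(np.linspace(0, 3, 100)), 0, 1)    # stand-in for FFNN output probabilities
print(detect_events(probs))
```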

Nonverbal Expressions in New Media Art -Case Studies about Facial Expressions and Sound (뉴미디어 아트에 나타난 비언어적 표현 -표정과 소리의 사례연구를 중심으로)

  • Yoo, Mi; An, KyoungHee
    • The Journal of the Korea Contents Association / v.19 no.10 / pp.146-156 / 2019
  • New media art moves beyond constraints of place and time, sublimates the benefits of technology into art, and presents a new way of communicating with the audience. This paper analyses tendencies in nonverbal communication by examining examples of facial expressions and sound used in new media art from its early period. The analysis shows that the digital paradigm in new media art involves nonlinear thinking, which gives rise to perceptual effects of immersion and dispersion. Facial expression in new media art not only overcomes the spatial and temporal limits of expression through visual distortion, enlargement, and virtualisation, but also enables new modes of communication by displaying facial parts combined or separated in the digital environment. Sound in new media art does not remain purely auditory; it pursues multi-sensory and synesthetic experience in cooperation with the visual and the tactile, and evolves by revealing characteristics of spatial expansion, sensibility, and audience interaction.

Natural 3D Lip-Synch Animation Based on Korean Phonemic Data (한국어 음소를 이용한 자연스러운 3D 립싱크 애니메이션)

  • Jung, Il-Hong; Kim, Eun-Ji
    • Journal of Digital Contents Society / v.9 no.2 / pp.331-339 / 2008
  • This paper presents the development of a highly efficient and accurate system for producing animation key data for 3D lip-sync animation. The system automatically extracts Korean phonemes from sound and text data and then computes animation key data from the segmented phonemes. This key data can be used by the 3D lip-sync animation system developed here as well as by commercial 3D facial animation systems. Conventional 3D lip-sync animation systems segment sound data into phonemes based on the English phonemic system and produce lip-sync key data from those phonemes; the drawbacks are that this produces unnatural animation for Korean contents and requires supplementary manual work. We therefore propose a 3D lip-sync animation system that automatically segments sound and text data into phonemes based on the Korean phonemic system and produces natural lip-sync animation from the segmented phonemes.
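
As a hypothetical illustration of turning a phoneme segmentation into animation key data (the paper's phoneme-to-mouth-shape mapping and key-data format are not given here, so the table and timings below are invented):

```python
# A hypothetical sketch, not the paper's system: given a phoneme segmentation
# (phoneme, start time, end time), emit mouth-shape keyframes. The phoneme-to-
# viseme table and key-data format below are invented for illustration.
PHONEME_TO_VISEME = {   # assumed viseme labels for a few Korean phonemes
    "ㅏ": "open_wide", "ㅗ": "round", "ㅜ": "round_narrow",
    "ㅣ": "spread", "ㅁ": "closed", "ㅂ": "closed", "ㅅ": "narrow",
}

def phonemes_to_keyframes(segments, fps=30):
    """segments: list of (phoneme, start_sec, end_sec) -> list of (frame, viseme)."""
    keyframes = []
    for phoneme, start, end in segments:
        viseme = PHONEME_TO_VISEME.get(phoneme, "neutral")
        start_frame = round(start * fps)
        end_frame = round(end * fps)
        keyframes.append((start_frame, viseme))       # open the mouth shape
        keyframes.append((end_frame, "neutral"))      # relax back afterwards
    return keyframes

# Example: a short segmentation of "엄마" with synthetic timings.
segments = [("ㅓ", 0.00, 0.12), ("ㅁ", 0.12, 0.22), ("ㅏ", 0.22, 0.40)]
print(phonemes_to_keyframes(segments))
```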
