• Title/Summary/Keyword: 컬러 변환 모델 (color conversion model)


Real-time Implementation of Sound into Color Conversion System Based on the Colored-hearing Synesthetic Perception (색-청 공감각 인지 기반 사운드-컬러 신호 실시간 변환 시스템의 구현)

  • Bae, Myung-Jin; Kim, Sung-Ill
    • The Journal of the Korea Contents Association / v.15 no.12 / pp.8-17 / 2015
  • This paper presents a sound-to-color signal conversion based on colored-hearing synesthesia. The aim is to implement a real-time conversion system focused on hearing and sight, which account for a large part of the bodily senses. The proposed real-time conversion of sound into color is simple and intuitive: scale, octave and velocity are extracted from MIDI input signals and converted into hue, intensity and saturation, respectively, as the basic elements of the HSI color model. In experiments, both a hardware system for delivering MIDI signals to a PC and a VC++ based software system for monitoring input and output signals were implemented, confirming that the conversion was performed correctly by the proposed method.
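The abstract specifies the mapping (scale → hue, octave → intensity, velocity → saturation) but not the scaling functions. A minimal Python sketch of one plausible linear mapping follows; the ranges and constants are assumptions for illustration, not the paper's values.

```python
# Sketch only: linear MIDI-to-HSI mapping; ranges are assumed, not from the paper.
def midi_note_to_hsi(note: int, velocity: int) -> tuple[float, float, float]:
    """Map a MIDI note/velocity pair to an (H, S, I) triple."""
    scale = note % 12              # pitch class within the octave (0-11)
    octave = note // 12            # MIDI octave index (0-10)
    hue = scale * (360.0 / 12)     # scale  -> hue: 12 pitch classes over the hue circle
    intensity = octave / 10.0      # octave -> intensity: higher octave, brighter color
    saturation = velocity / 127.0  # velocity -> saturation: louder note, purer color
    return hue, saturation, intensity

print(midi_note_to_hsi(60, 100))   # middle C at velocity 100 -> (0.0, ~0.79, 0.5)
```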

Conversion of Image into Sound Based on HSI Histogram (HSI 히스토그램에 기초한 이미지-사운드 변환)

  • Kim, Sung-Il
    • The Journal of the Acoustical Society of Korea / v.30 no.3 / pp.142-148 / 2011
  • The final aim of the present study is to develop an intelligent robot that emulates human synesthetic skills, which make it possible to associate a color image with a specific sound on the basis of mutual conversion between color image and sound. As a first step toward this goal, the study focuses on a basic system that converts a color image into sound. The proposed method is based on the similarity between the physical frequency information of light and that of sound. It was implemented with HSI histograms obtained through RGB-to-HSI color model conversion, using Microsoft Visual C++ (ver. 6.0). Two different color images were used in the simulation experiments, and the results revealed that the hue, saturation and intensity elements of each input color image were converted into the fundamental frequency, harmonic and octave elements of a sound, respectively. The converted sound elements were then synthesized with Csound to automatically generate a sound source in WAV file format.
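The RGB-to-HSI step mentioned above follows the standard HSI definitions; a minimal per-pixel sketch is given below. The histogram-to-sound scaling is not specified in the abstract and is omitted.

```python
# Standard per-pixel RGB-to-HSI conversion; the histogram-to-sound scaling is omitted.
import math

def rgb_to_hsi(r: float, g: float, b: float) -> tuple[float, float, float]:
    """Convert normalized RGB in [0, 1] to (hue in degrees, saturation, intensity)."""
    i = (r + g + b) / 3.0
    if i == 0.0:
        return 0.0, 0.0, 0.0                       # black: hue/saturation undefined
    s = 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    h = math.degrees(math.acos(max(-1.0, min(1.0, num / den)))) if den > 0 else 0.0
    if b > g:                                      # hue angle measured from red
        h = 360.0 - h
    return h, s, i

print(rgb_to_hsi(0.9, 0.2, 0.1))   # a reddish pixel -> hue near 0 degrees
```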

A Basic Study on the Pitch-based Sound into Color Image Conversion (피치 기반 사운드-컬러이미지 변환에 관한 기초연구)

  • Kang, Kun-Woo; Kim, Sung-Ill
    • Science of Emotion and Sensibility / v.15 no.2 / pp.231-238 / 2012
  • This study aims to build an application system that converts sound into a color image based on synesthetic perception. As the major features of the input sound, the scale and octave elements extracted from the F0 (fundamental frequency) were converted into the hue and intensity elements of the HSI color model, respectively, while the saturation was fixed at 0.5. Based on color model conversion theory, the HSI color model was then converted into the RGB model, so that a color image in BMP format was finally created. In experiments, the basic system was implemented on both software and hardware (TMS320C6713 DSP) platforms using the proposed sound-to-color image conversion method. The results revealed that diverse color images with different hues and intensities were created depending on the scales and octaves extracted from the F0 of the input sound signals. The outputs on the hardware platform were also identical to those on the software platform.
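A hypothetical sketch of the pitch-based mapping described above: F0 is quantized to a scale/octave pair, which is mapped to hue and intensity with the saturation fixed at 0.5. The quantization via MIDI note numbers and the linear scaling constants are assumptions, not the paper's exact rules.

```python
# Sketch only: F0 -> (scale, octave) -> (H, S=0.5, I); scaling constants are assumed.
import math

def f0_to_hsi(f0_hz: float) -> tuple[float, float, float]:
    """Map a fundamental frequency in Hz to an (H, S, I) triple."""
    midi_note = int(round(69 + 12 * math.log2(f0_hz / 440.0)))  # quantize F0 to a note
    scale = midi_note % 12                  # pitch class (C, C#, ..., B)
    octave = midi_note // 12                # octave index
    hue = scale * (360.0 / 12)              # scale  -> hue
    intensity = min(octave / 10.0, 1.0)     # octave -> intensity
    return hue, 0.5, intensity              # saturation fixed at 0.5, as in the paper

print(f0_to_hsi(261.63))   # middle C -> hue 0.0, saturation 0.5, intensity 0.5
```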

The Structure and Feature of Color Appearance Models (컬러 어피어런스 모델의 구조 및 특성)

  • Heo, T.W.; Kim, J.S.; Cho, M.S.
    • Electronics and Telecommunications Trends / v.17 no.6 s.78 / pp.173-181 / 2002
  • Improving the color reproduction characteristics of color displays is a common wish of consumers worldwide. Achieving this requires the development of device-independent color imaging technology for color reproduction devices, and the technology underpinning it is improved color reproduction based on color appearance models. This article therefore introduces the latest technical trends in color appearance models, in chromatic adaptation transforms, and in color-difference formulas that accurately express color differences.
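As a small illustration of the color-difference formulas this article surveys, the sketch below computes the basic CIE76 ΔE*ab distance in CIELAB space; the more recent formulas covered by such surveys add weighting terms and are not reproduced here.

```python
# Illustration: the basic CIE76 color-difference formula (Euclidean distance in CIELAB).
import math

def delta_e_cie76(lab1: tuple[float, float, float],
                  lab2: tuple[float, float, float]) -> float:
    """Distance between two CIELAB colors given as (L*, a*, b*)."""
    return math.sqrt(sum((c1 - c2) ** 2 for c1, c2 in zip(lab1, lab2)))

# Two similar reds; a difference of roughly 2-3 is near the just-noticeable threshold.
print(delta_e_cie76((53.2, 80.1, 67.2), (52.0, 78.5, 65.0)))
```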

A Basic Study on the Conversion of Color Image into Musical Elements based on a Synesthetic Perception (공감각인지기반 컬러이미지-음악요소 변환에 관한 기초연구)

  • Kim, Sung-Il
    • Science of Emotion and Sensibility / v.16 no.2 / pp.187-194 / 2013
  • The final aim of the present study is to build a system that converts a color image into musical elements based on synesthetic perception, emulating the human synesthetic skills that make it possible to associate a color image with a specific sound. This can be done on the basis of the similarities between the physical frequency information of light and sound. As a first step, an input true-color image is converted into hue, saturation and intensity domains based on color model conversion theory. In the next step, musical elements including note, octave, loudness and duration are extracted from each domain of the HSI color model: a fundamental frequency (F0) is extracted from the hue and intensity histograms, while loudness and duration are extracted from the intensity and saturation histograms, respectively. In experiments, the proposed conversion system was implemented using standard C and Microsoft Visual C++ (ver. 6.0). Through the proposed system, the extracted musical elements were synthesized to generate a sound source in WAV file format. The simulation results revealed that the musical elements extracted from an input RGB color image were reflected in its output sound signals.
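The abstract names the extracted elements (note, octave, loudness, duration) but not the histogram statistics used. The sketch below assumes simple choices (dominant hue bin, mean intensity and saturation) purely for illustration.

```python
# Sketch only: the histogram statistics chosen here (argmax, means) are assumptions.
import numpy as np

def image_to_musical_elements(hsi: np.ndarray) -> dict:
    """hsi: (H, W, 3) array with hue in [0, 360) and saturation/intensity in [0, 1]."""
    hue, sat, inten = hsi[..., 0], hsi[..., 1], hsi[..., 2]
    hue_hist, _ = np.histogram(hue, bins=12, range=(0, 360))
    note = int(np.argmax(hue_hist))                    # dominant hue bin -> pitch class
    octave = int(round(float(inten.mean()) * 7)) + 1   # brighter image -> higher octave
    loudness = float(inten.mean())                     # 0.0 (silent) .. 1.0 (loud)
    duration = 0.25 + float(sat.mean()) * 1.75         # 0.25 s .. 2.0 s note length
    return {"note": note, "octave": octave, "loudness": loudness, "duration": duration}

# Example: an 8x8 mid-brightness green patch.
patch = np.stack([np.full((8, 8), 120.0), np.full((8, 8), 0.4), np.full((8, 8), 0.5)],
                 axis=-1)
print(image_to_musical_elements(patch))
```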

A Basic Study on the System of Converting Color Image into Sound (컬러이미지-소리 변환 시스템에 관한 기초연구)

  • Kim, Sung-Ill; Jung, Jin-Seung
    • Journal of the Korean Institute of Intelligent Systems / v.20 no.2 / pp.251-256 / 2010
  • This paper aims at developing an intelligent robot that emulates the human synesthetic skills which associate a color image with sound, so that an application system can be built on the principle of mutual conversion between color image and sound. As a first step, this study realizes a basic system for color image to sound conversion. It describes a new method of converting a color image into sound based on the similarity between the physical frequency information of light and sound, using color model conversion together with histograms in the converted color model. On the basis of the proposed method, a basic system was built using Microsoft Visual C++ (ver. 6.0). The simulation results revealed that the hue, saturation and intensity elements of an input color image were converted into the F0, harmonic and octave elements of a sound, respectively. The converted sound elements were synthesized to generate a sound source in WAV file format using the Csound toolkit.
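To complement the conversion step, the sketch below illustrates the synthesis stage: rendering a tone from an extracted F0, a harmonic weight and an octave shift, and writing it to a WAV file. It stands in for the Csound step; the actual orchestra and score used in the paper are not given in the abstract.

```python
# Sketch only: stands in for the Csound synthesis step; parameters are illustrative.
import math, struct, wave

def synthesize_wav(path: str, f0: float, harmonic: float, octave_shift: int,
                   seconds: float = 1.0, rate: int = 44100) -> None:
    """Render F0 plus one weighted harmonic, shifted by whole octaves, to a WAV file."""
    freq = f0 * (2 ** octave_shift)                        # apply the octave element
    frames = bytearray()
    for n in range(int(seconds * rate)):
        t = n / rate
        sample = math.sin(2 * math.pi * freq * t)          # fundamental
        sample += harmonic * math.sin(2 * math.pi * 2 * freq * t)  # second harmonic
        frames += struct.pack('<h', int(32767 * 0.5 * sample / (1 + harmonic)))
    with wave.open(path, 'wb') as w:
        w.setnchannels(1)                                  # mono, 16-bit PCM
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(bytes(frames))

synthesize_wav('converted.wav', f0=261.63, harmonic=0.4, octave_shift=1)
```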

The System of Converting Muscular Sense into both Color and Sound based on the Synesthetic Perception (공감각인지 기반 근감각신호에서 색·음으로의 변환 시스템)

  • Bae, Myung-Jin; Kim, Sung-Ill
    • Journal of the Korean Institute of Intelligent Systems / v.24 no.5 / pp.462-469 / 2014
  • As a basic study on both engineering applications and representation methods of synesthesia, this paper aims at building a basic system that converts a muscular sense into both visual and auditory elements. Muscular-sense data are acquired as roll and pitch signals calculated from a three-axis acceleration sensor and a two-axis gyro sensor, and these signals are then converted into visual and auditory information as outputs. The roll signals are converted into both the intensity element of the HSI color model and octaves as one of the auditory elements, while the pitch signals are converted into both the hue element of the HSI color model and scales as another auditory element. Each extracted element of the HSI color model is converted into the corresponding element of the RGB color model, so that real-time output color signals are obtained. The octaves and scales are likewise converted and synthesized into MIDI signals, producing real-time sound signals as the other output. In experiments, the results revealed that normal color and sound output signals were successfully obtained from roll and pitch values representing muscular senses or physical movements, according to a conversion relationship based on the similarity between color and sound.
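A hedged sketch of the stated mapping (roll → intensity and octave, pitch → hue and scale) is given below. The accelerometer-only tilt formulas and the linear scaling ranges are illustrative assumptions; the paper's sensor fusion and calibration details are not reproduced.

```python
# Sketch only: accelerometer-only tilt and linear scaling ranges are assumptions.
import math

def tilt_from_accel(ax: float, ay: float, az: float) -> tuple[float, float]:
    """Roll and pitch in degrees from a 3-axis accelerometer reading (gravity only)."""
    roll = math.degrees(math.atan2(ay, az))
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    return roll, pitch

def tilt_to_color_and_sound(roll: float, pitch: float) -> dict:
    intensity = (roll + 180.0) / 360.0            # roll  -> brightness in [0, 1]
    octave = int(intensity * 7) + 1               # roll  -> octave 1-8
    hue = (pitch + 90.0) / 180.0 * 360.0          # pitch -> hue angle in [0, 360]
    scale = int((pitch + 90.0) / 180.0 * 11)      # pitch -> pitch class 0-11
    return {"hue": hue, "saturation": 1.0,        # saturation fixed here (assumption)
            "intensity": intensity, "octave": octave, "scale": scale}

roll, pitch = tilt_from_accel(0.0, 0.5, 0.87)     # device tilted sideways by ~30 degrees
print(tilt_to_color_and_sound(roll, pitch))
```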

An Algorithm to Transform RDF Models into Colored Petri Nets (RDF 모델을 컬러 페트리 넷으로 변환하는 알고리즘)

  • Yim, Jae-Geol; Gwon, Ki-Young; Joo, Jae-Hun; Lee, Kang-Jai
    • Journal of the Korea Society of Computer and Information / v.14 no.1 / pp.173-181 / 2009
  • This paper proposes an algorithm to transform RDF (Resource Description Framework) models for ontology into CPN (Colored Petri Net) models. The algorithm transforms the semantics of the RDF model into the topology of the CPN by mapping the classes and properties of the RDF onto the places of the CPN model, and then reflects the RDF statements on the CPN by representing the relationships between them as token transitions. The basic idea is to generate a token, which is an ordered pair consisting of two tokens (one from the place mapped to the subject and the other from the place mapped to the object), and transfer it to the place mapped to the predicate. CPN models were actually built for given RDF models in CPN Tools, and answers to RDF queries were inferred and extracted on CPN Tools.
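The sketch below illustrates the mapping the abstract describes, with plain Python dictionaries standing in for CPN places and markings: classes and properties become places, and each statement contributes an ordered-pair token on its predicate's place. It is not the CPN Tools model itself.

```python
# Sketch only: dictionaries stand in for CPN places/markings; not the CPN Tools model.
def rdf_to_cpn(triples: list[tuple[str, str, str]]) -> dict[str, list[tuple[str, str]]]:
    """Return a marking: place name -> list of (subject, object) pair tokens."""
    marking: dict[str, list[tuple[str, str]]] = {}
    for subject, predicate, obj in triples:
        marking.setdefault(subject, [])           # one place per class/resource
        marking.setdefault(obj, [])
        # the statement itself becomes an ordered-pair token on the predicate's place
        marking.setdefault(predicate, []).append((subject, obj))
    return marking

triples = [("ex:Seoul", "rdf:type", "ex:City"),
           ("ex:Seoul", "ex:locatedIn", "ex:Korea")]
print(rdf_to_cpn(triples))
```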

Implementation of ARM based Embedded System for Muscular Sense into both Color and Sound Conversion (근감각-색·음 변환을 위한 ARM 기반 임베디드시스템의 구현)

  • Kim, Sung-Ill
    • The Journal of the Korea Contents Association / v.16 no.8 / pp.427-434 / 2016
  • This paper focuses on real-time hardware processing by implementing an ARM Cortex-M4 based embedded system that runs a conversion algorithm from muscular sense to visual and auditory elements, recognizing body rotations, directional changes and amounts of motion. As the input method for the muscular sense, an AHRS (Attitude Heading Reference System) was used to acquire roll, pitch and yaw values in real time. These three input values were converted into the three elements of the HSI color model, namely intensity, hue and saturation, respectively, and the final color signals were obtained by converting HSI into the RGB color model. In addition, the three input values were converted into three elements of sound, namely octave, scale and velocity, which were synthesized into an output sound using MIDI (Musical Instrument Digital Interface). Analysis of the output color and sound signals revealed that the muscular-sense input signals were correctly converted into both color and sound in real time by the proposed conversion method.
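A minimal sketch of the stated AHRS-to-output mapping (roll/pitch/yaw → intensity/hue/saturation and octave/scale/velocity, the latter packed into a MIDI note-on message) follows; the scaling ranges and the single note-on encoding are assumptions for illustration.

```python
# Sketch only: scaling ranges and the single note-on encoding are assumptions.
def ahrs_to_outputs(roll: float, pitch: float, yaw: float) -> tuple[tuple, bytes]:
    """roll/pitch in [-180, 180], yaw in [0, 360) degrees -> ((H, S, I), MIDI bytes)."""
    intensity = (roll + 180.0) / 360.0            # roll  -> I in [0, 1]
    hue = (pitch + 180.0) / 360.0 * 360.0         # pitch -> H in [0, 360]
    saturation = yaw / 360.0                      # yaw   -> S in [0, 1]

    octave = int(intensity * 9)                   # roll  -> octave 0-9
    scale = int(hue / 360.0 * 11)                 # pitch -> pitch class 0-11
    velocity = int(saturation * 127)              # yaw   -> MIDI velocity 0-127
    note = min(octave * 12 + scale, 127)
    note_on = bytes([0x90, note, velocity])       # MIDI note-on message, channel 1
    return (hue, saturation, intensity), note_on

hsi, midi_msg = ahrs_to_outputs(roll=30.0, pitch=-45.0, yaw=90.0)
print(hsi, midi_msg.hex())
```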

Genetic Programming based Illumination Robust and Non-parametric Multi-colors Detection Model (밝기변화에 강인한 Genetic Programming 기반의 비파라미터 다중 컬러 검출 모델)

  • Kim, Young-Kyun; Kwon, Oh-Sung; Cho, Young-Wan; Seo, Ki-Sung
    • Journal of the Korean Institute of Intelligent Systems / v.20 no.6 / pp.780-785 / 2010
  • This paper introduces a GP (Genetic Programming) based color detection model for object detection and tracking. Existing color detection methods have used linear or nonlinear transformations of the RGB color model, or color models improved against illumination variation by optimization or learning techniques. However, most of them have difficulty classifying various colors because of interference among color channels, and they are not robust to illumination variation. To solve these problems, we propose an illumination-robust and non-parametric multi-color detection model obtained through GP evolution. The proposed method is compared with existing color models on various colors and on images with different lighting conditions.
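To make the idea concrete, the sketch below shows how a GP-evolved, non-parametric classifier can be applied per pixel: an expression tree over the RGB channels is evaluated and thresholded. The example tree is made up for illustration and is not an individual evolved in the paper.

```python
# Sketch only: the expression tree below is made up, not an individual from the paper.
import operator

# function set with protected division; the terminal set is the three channel names
FUNCS = {"add": operator.add, "sub": operator.sub, "mul": operator.mul,
         "div": lambda a, b: a / b if abs(b) > 1e-6 else 1.0}

def evaluate(tree, r: float, g: float, b: float) -> float:
    """Recursively evaluate a GP expression tree on one pixel's normalized RGB."""
    if isinstance(tree, str):                     # terminal node: a channel name
        return {"r": r, "g": g, "b": b}[tree]
    op, left, right = tree                        # internal node: (function, lhs, rhs)
    return FUNCS[op](evaluate(left, r, g, b), evaluate(right, r, g, b))

# example individual: redness normalized by overall brightness (illumination-tolerant)
individual = ("div", ("sub", "r", "g"), ("add", ("add", "r", "g"), "b"))

def is_target_color(r: float, g: float, b: float, threshold: float = 0.2) -> bool:
    return evaluate(individual, r, g, b) > threshold

print(is_target_color(0.8, 0.2, 0.1))   # strongly red pixel -> True
print(is_target_color(0.3, 0.3, 0.3))   # gray pixel -> False
```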