• Title/Summary/Keyword: Color frequency

A Basic Study on the Conversion of Color Image into Musical Elements based on a Synesthetic Perception (공감각인지기반 컬러이미지-음악요소 변환에 관한 기초연구)

  • Kim, Sung-Il
    • Science of Emotion and Sensibility, v.16 no.2, pp.187-194, 2013
  • The final aim of the present study is to build a system that converts a color image into musical elements based on synesthetic perception, emulating the human synesthetic skill of associating a color image with a specific sound. This can be done on the basis of the similarity between the physical frequency information of light and sound. As a first step, an input true-color image is converted into hue, saturation, and intensity domains based on color model conversion theory. In the next step, musical elements including note, octave, loudness, and duration are extracted from each domain of the HSI color model. A fundamental frequency (F0) is then extracted from the hue and intensity histograms, while the loudness and duration are extracted from the intensity and saturation histograms, respectively. In experiments, the proposed system for converting a color image into musical elements was implemented in standard C and Microsoft Visual C++ (ver. 6.0). Through the proposed system, the extracted musical elements were synthesized to generate a sound source in WAV file format. The simulation results revealed that the musical elements extracted from an input RGB color image were reflected in its output sound signals.
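A minimal sketch of the kind of hue-to-pitch mapping described above, written in standard C (the paper's stated implementation language). The 12-tone chromatic mapping, the octave range, and the scaling constants are illustrative assumptions, not the authors' actual formulas.

```c
#include <math.h>
#include <stdio.h>

/* Illustrative HSI-to-musical-element mapping (assumed constants):
 * hue selects a semitone, intensity selects an octave. */
double hsi_to_frequency(double hue_deg, double intensity)
{
    int note   = (int)(hue_deg / 360.0 * 12.0) % 12;  /* semitone 0..11    */
    int octave = 3 + (int)(intensity * 4.0);          /* octaves 3..7      */
    int midi   = 12 * (octave + 1) + note;            /* MIDI note number  */
    return 440.0 * pow(2.0, (midi - 69) / 12.0);      /* equal temperament */
}

int main(void)
{
    double f0       = hsi_to_frequency(10.0, 0.5);  /* reddish hue, medium brightness */
    double loudness = 0.5;   /* would come from the intensity histogram (assumed)  */
    double duration = 0.25;  /* would come from the saturation histogram (assumed) */
    printf("F0 = %.1f Hz, loudness = %.2f, duration = %.2f s\n",
           f0, loudness, duration);
    return 0;
}
```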

A Basic Study on the Conversion of Sound into Color Image using both Pitch and Energy

  • Kim, Sung-Ill
    • International Journal of Fuzzy Logic and Intelligent Systems, v.12 no.2, pp.101-107, 2012
  • This study describes a proposed method of converting an input sound signal into a color image by emulating the human synesthetic skill of associating a sound source with a specific color image. As a first step of the sound-to-image conversion, features such as the fundamental frequency (F0) and energy are extracted from an input sound source. A musical scale and an octave are then calculated from the F0 signal, so that scale, energy, and octave can be converted into the three elements of the HSI model, namely hue, saturation, and intensity, respectively. Finally, a color image in BMP file format is created as the output of the HSI-to-RGB conversion. We built a basic system based on the proposed method using standard C programming. The simulation results revealed that output color images in BMP file format created from input sound sources have diverse hues corresponding to changes in the F0 signal, where the hue elements have different intensities depending on the octave, with a minimum frequency of 20 Hz. Furthermore, the output images also have various levels of chroma (or saturation), which is directly converted from the energy.
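A rough C sketch of the sound-to-HSI mapping described above (scale → hue, energy → saturation, octave → intensity), using the 20 Hz minimum frequency mentioned in the abstract as a reference; the linear scalings are assumptions for illustration only.

```c
#include <math.h>

typedef struct { double h, s, i; } HSI;

/* Assumed mapping: position within the octave -> hue, energy -> saturation,
 * octave count above 20 Hz -> intensity. Not the authors' exact formulas. */
HSI sound_to_hsi(double f0_hz, double energy /* normalized 0..1 */)
{
    HSI c;
    double octaves = log2(f0_hz / 20.0);   /* octaves above the 20 Hz minimum */
    int    octave  = (int)octaves;         /* whole-octave index              */
    double scale   = octaves - octave;     /* position within the octave      */

    c.h = scale * 360.0;                   /* hue in degrees  */
    c.s = energy;                          /* saturation 0..1 */
    c.i = octave / 10.0;                   /* intensity 0..1  */
    if (c.i > 1.0) c.i = 1.0;
    return c;
}
```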

Conversion of Image into Sound Based on HSI Histogram (HSI 히스토그램에 기초한 이미지-사운드 변환)

  • Kim, Sung-Il
    • The Journal of the Acoustical Society of Korea, v.30 no.3, pp.142-148, 2011
  • The final aim of the present study is to develop an intelligent robot that emulates the human synesthetic skill of associating a color image with a specific sound, based on mutual conversion between color images and sounds. As a first step toward that goal, this study focused on a basic system for converting a color image into sound. The proposed method converts a color image into sound based on the similarity between the physical frequency information of light and sound. The conversion was implemented using HSI histograms obtained through an RGB-to-HSI color model conversion, written in Microsoft Visual C++ (ver. 6.0). Two different color images were used in the simulation experiments, and the results revealed that the hue, saturation, and intensity elements of each input color image were converted into the fundamental frequency, harmonic, and octave elements of a sound, respectively. Through the proposed system, the converted sound elements were then synthesized to automatically generate a sound source in WAV file format, using Csound.
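The RGB-to-HSI conversion that the first step relies on can be sketched as follows; this is the standard textbook formulation and may differ in detail from the authors' implementation.

```c
#include <math.h>

/* Standard RGB-to-HSI conversion (textbook form); r, g, b in [0, 1]. */
void rgb_to_hsi(double r, double g, double b,
                double *h, double *s, double *i)
{
    const double PI = 3.14159265358979323846;
    double sum = r + g + b;
    double min = fmin(r, fmin(g, b));
    double num = 0.5 * ((r - g) + (r - b));
    double den = sqrt((r - g) * (r - g) + (r - b) * (g - b));
    double theta = acos(num / (den + 1e-12)); /* epsilon guards division by zero */

    *i = sum / 3.0;
    *s = (sum > 0.0) ? 1.0 - 3.0 * min / sum : 0.0;
    *h = (b <= g) ? theta : 2.0 * PI - theta; /* hue in radians */
}
```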

Color Similarity Definition Based on Quantized Color Histogram for Clothing Identification

  • Choi, Yoo-Joo; Moon, Nam-Mee
    • Proceedings of the Korean Society of Broadcast Engineers Conference, 2009.01a, pp.396-399, 2009
  • In this paper, we present a method to define the color similarity between color images using octree-based quantization and similar-color integration. The proposed method defines major colors for each image using octree-based quantization. Two color palettes consisting of the major colors are compared based on Euclidean distance, and similar color bins between the palettes are matched. Multiple matched color bins are integrated and the major colors are adjusted. A color histogram based on the color palette is constructed for each image, and the difference between two histograms is computed as the weighted Euclidean distance between the matched color bins, taking the frequency of each bin into consideration. As an experiment to validate the usefulness of the method, we identified the same clothing in CCD camera images based on the proposed color similarity analysis, retrieving the same clothing images with a success rate of 88% using only color analysis, without texture analysis.
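A small C sketch of the final comparison step described above, the weighted Euclidean distance between two quantized color histograms whose bins have already been matched; the choice of per-bin weights is an assumption here.

```c
#include <math.h>

/* Weighted Euclidean distance between two matched, quantized color
 * histograms. bin_weight[k] stands in for the per-bin frequency
 * weighting mentioned in the abstract (exact scheme assumed). */
double weighted_histogram_distance(const double *hist_a, const double *hist_b,
                                   const double *bin_weight, int num_bins)
{
    double sum = 0.0;
    for (int k = 0; k < num_bins; ++k) {
        double d = hist_a[k] - hist_b[k];
        sum += bin_weight[k] * d * d;
    }
    return sqrt(sum);
}
```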

The potentiality of color preference analysis by EEG (뇌파분석 통한 색상의 선호도 분석 가능성)

  • Kim, Min-Kyung; Ryu, Hee-Wook
    • Science of Emotion and Sensibility, v.14 no.2, pp.311-320, 2011
  • To quantitatively analyze the effects of color stimulation, one of the major factors affecting human emotion, we studied the relationship between color preference and electroencephalography (EEG) for three color stimuli: bright yellow red (BYR), deep green yellow (DGY), and vivid blue (VB). The physiological signals measured by EEG under color stimulation were closely related to the colors' well-known images. The brain became more activated as the color temperature decreased (BYR ≥ DGY > VB), and the right hemisphere was more sensitive than the left. On the whole, the EEG values of the frequency bands were ordered beta ≥ theta and alpha > gamma. The beta wave increased as the color temperature decreased (BYR ≥ DGY > VB), and the alpha, beta, and gamma waves increased as the color temperature increased (BYR ≥ DGY > VB). The relationship between color preference and EEG values showed that the EEG becomes more activated in certain frequency bands when the color preference is higher. In conclusion, specific frequency bands could be activated by color stimuli with higher preference, which means that these color stimuli can be applied in various industries such as beauty, interior design, fashion design, and color therapy.

Edge-based spatial descriptor for content-based Image retrieval (내용 기반 영상 검색을 위한 에지 기반의 공간 기술자)

  • Kim, Nac-Woo; Kim, Tae-Yong; Choi, Jong-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP, v.42 no.5 s.305, pp.1-10, 2005
  • Content-based image retrieval systems are being actively investigated owing to their ability to retrieve images based on actual visual content rather than manually associated textual descriptions. In this paper, we propose a novel approach to image retrieval based on edge structural features, using an edge correlogram and a color coherence vector. After the color vector angle is applied in the pre-processing stage, an image is divided into two parts: a high-frequency image and a low-frequency image. In the low-frequency image, the global color distribution of smooth pixels is extracted by the color coherence vector, thereby incorporating spatial information into the proposed color descriptor. In the high-frequency image, the distribution of gray pairs at edges is extracted by the edge correlogram. Since the proposed algorithm includes spatial and edge information between colors, it is robust to significant changes in appearance and shape in image analysis. The proposed method provides a simple and flexible description of images with complex scenes in terms of the structural features of the image contents. Experimental evidence suggests that our algorithm outperforms recent histogram refinement methods for image indexing and retrieval. To index the multidimensional feature vectors, we use an R*-tree structure.
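A toy C sketch of the edge-correlogram idea used for the high-frequency image: counting how often gray-level pairs co-occur at a given pixel distance. Only the horizontal direction is shown and no gray-level quantization is applied; the paper's descriptor is more elaborate.

```c
/* Count co-occurrences of gray-level pairs (gi, gj) separated by
 * distance d along the horizontal direction of an edge (high-frequency)
 * image; counts[gi][gj] accumulates the correlogram entries. */
void edge_correlogram(const unsigned char *gray, int width, int height,
                      int d, unsigned long counts[256][256])
{
    for (int y = 0; y < height; ++y)
        for (int x = 0; x + d < width; ++x) {
            unsigned char gi = gray[y * width + x];
            unsigned char gj = gray[y * width + x + d];
            counts[gi][gj]++;
        }
}
```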

Color Analysis of Glasses Cases of the Middle and Late Joseon Dynasty, by Materials (조선 중.후기 안경집의 소재에 따른 색채 특성)

  • Lee, Young-Kyung; Kim, Young-In
    • Journal of the Korean Society of Costume, v.58 no.4, pp.35-46, 2008
  • The purpose of this study was to closely examine the history of the glasses and glasses cases used in the middle and late Joseon Dynasty and to identify the inherent qualities of traditional Korean glasses cases through color analyses of the cases' materials and shapes. While the theoretical examination was based on the literature on glasses and glasses cases, which first appeared around the time of the Japanese invasion of Korea (Imjin War) in 1592, the practical analyses were performed on photos of glasses cases from the middle and late Joseon Dynasty, collected from museum pieces and the internet and grouped into wood, fabric, paper, sharkskin, hawksbill, and cow horn according to their materials. 623 color samples were extracted from the 159 collected glasses cases, and quantitative analyses were performed for each material. The representative colors identified by the color analyses were classified into color schemes for the main materials and the accessories. The results of this study are as follows. First, Yellow and Yellow Red were the colors most used in the main materials; in fabric cases a wide range of colors was used in embroidery, and in cases made of animal materials such as sharkskin, hawksbill, and cow horn, which can be used as they are or dyed, Green Yellow appeared with high frequency. Second, the accessories showed coloration similar to the main materials, which indicates that traditional Korean cases are characterized by similar coloration between main materials and accessories; Red Purple and Purple Blue appeared with high frequency in accessories, used as accent colors. Finally, based on the analysis of hue and tone, middle- and low-value colors appeared with very high frequency, while high-chroma colors rarely appeared.

Color-Image Guided Depth Map Super-Resolution Based on Iterative Depth Feature Enhancement

  • Lijun Zhao; Ke Wang; Jinjing Zhang; Jialong Zhang; Anhong Wang
    • KSII Transactions on Internet and Information Systems (TIIS), v.17 no.8, pp.2068-2082, 2023
  • With the rapid development of deep learning, Depth Map Super-Resolution (DMSR) methods have achieved increasingly advanced performance. However, when the upsampling rate is very large, it is difficult for these DMSR methods to capture the structural consistency between color features and depth features. Therefore, we propose a color-image-guided DMSR method based on iterative depth feature enhancement. Considering the difference between high-quality color features and low-quality depth features, we propose to decompose the depth features into High-Frequency (HF) and Low-Frequency (LF) components. Owing to the structural homogeneity of the depth HF components and the HF color features, only the HF color features are used to enhance the depth HF features, without using the LF color features. Before each HF/LF depth feature decomposition, the LF component of the previous decomposition and the updated HF component are combined. After decomposing and reorganizing the recursively updated features, we combine all the depth LF features with the final updated depth HF features to obtain the enhanced depth features. Next, the enhanced depth features are fed into a multistage depth map fusion reconstruction block, in which a cross-enhancement module is introduced to fully exploit the spatial correlation of the depth map by interleaving various features between different convolution groups. Experimental results show that the proposed method is superior to many recent DMSR methods on two objective measures, root mean square error and mean absolute deviation.
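A conceptual C sketch of the high/low-frequency feature decomposition the method builds on, with the low-frequency part taken as a local mean (3×3 box filter) and the high-frequency part as the residual. The actual method performs this decomposition on learned feature maps inside a network; this is only an illustration of the idea.

```c
/* Decompose a single-channel feature map into low-frequency (local mean)
 * and high-frequency (residual) parts. Border pixels use a shrunken window. */
void decompose_hf_lf(const float *feat, float *lf, float *hf, int w, int h)
{
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float sum = 0.0f;
            int n = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    int yy = y + dy, xx = x + dx;
                    if (yy >= 0 && yy < h && xx >= 0 && xx < w) {
                        sum += feat[yy * w + xx];
                        ++n;
                    }
                }
            lf[y * w + x] = sum / (float)n;                  /* low-frequency  */
            hf[y * w + x] = feat[y * w + x] - lf[y * w + x]; /* high-frequency */
        }
}
```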

Color preference of preschool children for the paper and for furniture (색지와 색가구를 통해 본 취학전 아동의 색채선호 경향에 관한 연구)

  • 이연숙
    • Journal of the Korean Home Economics Association, v.23 no.3, pp.137-147, 1985
  • The purpose of this study was to investigate 1) color concept development in preschool children, 2) general color preferences using colored papers, 3) specific color preferences using colored chairs, and 4) the relationships of sex and age to general color preference. The experiment, using materials developed through a pilot experiment, was conducted with 70 three-, four-, and five-year-old children attending the Child Development Research Institute of Yonsei University as subjects. Data were analyzed with the SAS package using frequency, percentage, the χ²-test, and C², and were visualized with SAS graphics on a Tektronix 4113.

Temporal-perceptual Judgement of Visuo-Auditory Stimulation (시청각 자극의 시간적 인지 판단)

  • Yu, Mi; Lee, Sang-Min; Piao, Yong-Jun; Kwon, Tae-Kyu; Kim, Nam-Gyun
    • Journal of the Korean Society for Precision Engineering, v.24 no.1 s.190, pp.101-109, 2007
  • For the spatio-temporal perception of visuo-auditory stimuli, researchers have proposed an optimal integration hypothesis, in which the perceptual process optimizes the interaction of the senses to improve the precision of perception. When visual information, which is generally considered dominant over the other senses, is ambiguous, information from another sense, such as an auditory stimulus, influences the perceptual process in interaction with the visual information. We therefore performed two different experiments to determine the conditions under which the senses interact and the influence of those conditions, considering the interaction of visuo-auditory stimulation in free space, the color of the visual stimulus, and sex differences among normal participants. In the first experiment, 12 participants were asked to judge changes in the frequency of audio-visual stimulation using visual flicker and auditory flutter stimulation in free space. When auditory temporal cues were presented, a change in the frequency of the visual stimulation was associated with a perceived change in the frequency of the auditory stimulation, consistent with the results of previous studies using headphones. In the second experiment, 30 males and 30 females were asked to judge changes in the frequency of audio-visual stimulation using a colored visual flicker and auditory flutter stimulation, with red and green as the color conditions. Male and female participants showed the same perceptual tendency; however, the standard deviation for females was larger than that for males. These results imply that audio-visual asymmetry effects are influenced by visual and auditory cues, such as the spatial orientation between the auditory and visual stimuli and the color of the visual stimulus.