• Title/Summary/Keyword: Color Similarity

Implementation of Image Retrieval System using Complex Image Features (복합적인 영상 특성을 이용한 영상 검색 시스템 구현)

  • 송석진;남기곤
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.6 no.8
    • /
    • pp.1358-1364
    • /
    • 2002
  • Multimedia data are growing rapidly in broadcasting and on the Internet. For the retrieval of still images from a multimedia database, this paper implements a content-based image retrieval system in which the user selects a query region containing the object of interest and retrieves similar objects from the image database. To extract color features, the query image is converted to the HSV color space with the proposed method, a histogram is built, and similarity to the database images is obtained by histogram intersection. The query image is also converted to a gray image and wavelet-transformed, from which the spatial gray-level distribution and texture features are extracted using a banded autocorrelogram and the GLCM, yielding a second similarity value. The final similarity is determined by adding the two similarity values, with a weight applied to each. By using not only color features but also gray-image features of the query image, the method compensates for the weaknesses of either alone. Improvements in recall and precision are verified in the experimental results.
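
A minimal sketch of the color half of this pipeline, assuming OpenCV-style HSV conversion: a normalized HSV histogram is intersected with a database histogram, and the result is combined with a precomputed texture similarity by a weighted sum. The bin counts and the weights `w_color`/`w_texture` are illustrative, not values from the paper.

```python
import cv2
import numpy as np

def hsv_histogram(image_bgr, bins=(8, 8, 8)):
    """Build a normalized HSV histogram for a BGR image (OpenCV convention)."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, list(bins),
                        [0, 180, 0, 256, 0, 256])
    return hist / (hist.sum() + 1e-12)

def histogram_intersection(h1, h2):
    """Swain-Ballard histogram intersection: sum of element-wise minima."""
    return float(np.minimum(h1, h2).sum())

def combined_similarity(color_sim, texture_sim, w_color=0.6, w_texture=0.4):
    """Weighted sum of color and texture similarities (weights are illustrative)."""
    return w_color * color_sim + w_texture * texture_sim
```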

Detecting Similar Designs Using Deep Learning-based Image Feature Extracting Model (딥러닝 기반 이미지 특징 추출 모델을 이용한 유사 디자인 검출에 대한 연구)

  • Lee, Byoung Woo;Lee, Woo Chang;Chae, Seung Wan;Kim, Dong Hyun;Lee, Choong Kwon
    • Smart Media Journal
    • /
    • v.9 no.4
    • /
    • pp.162-169
    • /
    • 2020
  • Design is a key factor that determines the competitiveness of products in the textile and fashion industry. Measuring the similarity of a proposed design is very important for preventing unauthorized copying and confirming originality. In this study, a deep learning technique was used to quantify features from images of textile designs, and similarity was measured using Spearman correlation coefficients. To verify that similar samples were actually detected, 300 images were randomly rotated or color-changed, and the Top-3 and Top-5 results ranked by similarity were checked to see whether the rotated or color-changed samples were retrieved. The VGG-16 model recorded significantly higher performance than AlexNet. For rotated images, the VGG-16 model achieved its best performance at 64% (Top-3) and 73.67% (Top-5); for color-changed images, it reached 86.33% (Top-3) and 90% (Top-5).
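
A plausible reading of this setup, sketched with torchvision's pretrained VGG-16 as a fixed feature extractor and SciPy's Spearman correlation; the layer used here (the 4096-d fc7 output) and the preprocessing are assumptions, since the abstract does not specify them.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from scipy.stats import spearmanr

# Pretrained VGG-16 used as a fixed feature extractor (final classifier layer removed).
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
vgg.classifier = vgg.classifier[:-1]   # keep the 4096-d fc7 output
vgg.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_features(path):
    """Return a 4096-d feature vector for one design image."""
    img = Image.open(path).convert("RGB")
    with torch.no_grad():
        return vgg(preprocess(img).unsqueeze(0)).squeeze(0).numpy()

def design_similarity(path_a, path_b):
    """Spearman rank correlation between the two feature vectors."""
    rho, _ = spearmanr(extract_features(path_a), extract_features(path_b))
    return rho
```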

Ocean Disaster Detection System(OD2S) using Geostationary Ocean Color Imager(GOCI) (천리안해양관측위성을 활용한 해양 재난 검출 시스템)

  • Yang, Hyun;Ryu, Jeung-Mi;Han, Hee-Jeong;Ryu, Joo-Hyung;Park, Young-Je
    • Journal of Information Technology Services
    • /
    • v.11 no.sup
    • /
    • pp.177-189
    • /
    • 2012
  • We developed an ocean disaster detection system (OD2S) that copes with ocean disasters (e.g., red and green tides, oil spills, typhoons, and sea ice) by converging ocean color remote sensing with information technologies for mass data processing and pattern recognition. The system, which is based on cosine similarity, detects ocean disasters in real time. Existing ocean color sensors operated on polar-orbiting platforms cannot observe the ocean environment in real time because their temporal resolution is limited to one observation a day. In contrast, the Geostationary Ocean Color Imager (GOCI), the first geostationary ocean color sensor in the world, produces ocean color products (e.g., chlorophyll, colored dissolved organic matter (CDOM), and total suspended solids (TSS)) at hourly intervals, up to eight observations a day. The evaluation demonstrated that the OD2S can detect excessive concentrations of chlorophyll, CDOM, and TSS. Based on these results, the OD2S is expected to detect ocean disasters in real time.
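
A minimal illustration of the cosine-similarity core of such a detector: a per-pixel vector of GOCI products (chlorophyll, CDOM, TSS) is compared against a reference disaster signature. The signature values and the threshold below are hypothetical; the actual OD2S products and thresholds are the authors'.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def flag_disaster(pixel_products, reference_signature, threshold=0.95):
    """
    Flag a pixel as a potential ocean-disaster candidate when its
    (chlorophyll, CDOM, TSS) vector is directionally close to a
    reference signature.  The threshold is illustrative only.
    """
    return cosine_similarity(pixel_products, reference_signature) >= threshold

# Example: a red-tide-like signature dominated by chlorophyll (hypothetical values).
print(flag_disaster([12.0, 0.4, 1.1], [10.0, 0.3, 1.0]))
```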

The Generation of SPOT True Color Image Using Neural Network Algorithm

  • Chen, Chi-Farn;Huang, Chih-Yung
    • Proceedings of the KSRS Conference
    • /
    • 2003.11a
    • /
    • pp.940-942
    • /
    • 2003
  • In an attempt to enhance the visual effect of SPOT imagery, this study develops a neural network algorithm to transform SPOT false color into simulated true color. The method has been tested using Landsat TM and SPOT images. Qualitative and quantitative comparisons indicate a striking similarity between the true and simulated true-color images in terms of both visual appearance and statistical analysis.
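
A rough sketch of the idea under stated assumptions: a small per-pixel regression network (here scikit-learn's MLPRegressor, not the authors' network) is trained to map SPOT band values to reference true-color RGB values, then applied to every pixel of a SPOT scene.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def train_color_transform(spot_pixels, true_rgb_pixels):
    """
    Fit a per-pixel mapping from SPOT bands to RGB using co-registered pixels
    where a true-color reference (e.g. Landsat TM) is available.
    spot_pixels: (N, 3) band values; true_rgb_pixels: (N, 3) reference RGB in [0, 1].
    Layer sizes and training settings are illustrative.
    """
    model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
    model.fit(spot_pixels, true_rgb_pixels)
    return model

def simulate_true_color(model, spot_image):
    """Apply the learned transform to every pixel of an (H, W, 3) SPOT image."""
    h, w, _ = spot_image.shape
    rgb = model.predict(spot_image.reshape(-1, 3)).reshape(h, w, 3)
    return np.clip(rgb, 0.0, 1.0)
```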

A Study on Efficient FPS Game Operation Using Attention NPC Extraction (관심 NPC 추출을 이용한 효율적인 FPS 게임 운영에 관한 연구)

  • Park, Changmin
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.13 no.2
    • /
    • pp.63-69
    • /
    • 2017
  • The extraction of attention NPCs in an FPS game has emerged as a significant issue. We propose an efficient FPS game operation method that extracts attention NPCs with simple arithmetic. First, the attention NPC is determined using color histogram intersection and texture similarity within each block. Next, the histogram of the NPC's movement distribution and its movement frequency are used, which allows block boundaries caused by texture to be excluded so that only the boundaries of object blocks are extracted. The edge strength is defined to have high values at NPC object boundaries and relatively low values at NPC texture boundaries or in the interior of a region. The region-merging step also adopts the color histogram intersection technique in order to use the color distribution of each region. Experiments confirmed that the attention NPC plays a crucial role in the FPS game, leading to faster and more strategic actions in the game.
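
The abstract names block-wise texture similarity alongside color histogram intersection. As a hedged stand-in for the paper's block features, the sketch below computes a GLCM-based texture descriptor per block (using scikit-image) and turns the descriptor distance into a similarity score; the chosen GLCM properties and the distance-to-similarity mapping are illustrative.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def block_texture_descriptor(gray_block):
    """GLCM texture descriptor (contrast, energy, homogeneity) for one uint8 grayscale block."""
    glcm = graycomatrix(gray_block, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return np.array([graycoprops(glcm, p).mean()
                     for p in ("contrast", "energy", "homogeneity")])

def texture_similarity(block_a, block_b):
    """Similarity in (0, 1]: 1 / (1 + Euclidean distance between descriptors)."""
    d = np.linalg.norm(block_texture_descriptor(block_a) - block_texture_descriptor(block_b))
    return 1.0 / (1.0 + d)
```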

Object-based Image Retrieval Using Dominant Color Pair and Color Correlogram (Dominant 컬러쌍 정보와 Color Correlogram을 이용한 객체기반 영상검색)

  • 박기태;문영식
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.40 no.2
    • /
    • pp.1-8
    • /
    • 2003
  • This paper proposes an object-based image retrieval technique based on dominant color pair information. Most existing methods for content-based retrieval extract features from the image as a whole instead of from the object of interest, so retrieval performance tends to degrade because of background colors. In the proposed object-based scheme, an object of interest is used as the query and similarity is measured over candidate regions of the DB images where the object may exist. From the segmented image, the dominant color pair information between adjacent regions is used to select candidate regions, and the similarity between the query image and a DB image is measured with the color correlogram technique. The dominant color pair information is robust against translation, rotation, and scaling. Experimental results show that the proposed method improves performance by reducing the errors caused by background colors.
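
A simple sketch of the correlogram side of this method: an autocorrelogram over quantized colors and a handful of distances, plus an L1-style similarity between two correlograms. The dominant-color-pair candidate selection is not reproduced here, and the distance set and similarity formula are illustrative.

```python
import numpy as np

def color_autocorrelogram(quantized, n_colors, distances=(1, 3, 5)):
    """
    For each quantized color c and distance d, estimate the probability that a
    pixel at offset d (horizontally or vertically) from a c-colored pixel also
    has color c.  `quantized` is a 2-D array of color-bin indices in [0, n_colors).
    """
    gram = np.zeros((n_colors, len(distances)))
    for di, d in enumerate(distances):
        pairs = [(quantized[:, :-d], quantized[:, d:]),   # horizontal neighbors
                 (quantized[:-d, :], quantized[d:, :])]   # vertical neighbors
        for c in range(n_colors):
            hits = total = 0
            for a, b in pairs:
                mask = (a == c)
                total += mask.sum()
                hits += (b[mask] == c).sum()
            gram[c, di] = hits / total if total else 0.0
    return gram

def correlogram_similarity(g1, g2):
    """A simple L1-based similarity between two correlograms (one of several common variants)."""
    return float(1.0 - np.abs(g1 - g2).sum() / (np.abs(g1 + g2).sum() + 1e-12))
```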

Design and Implementation of a Content-based Color Image Retrieval System based on Color -Spatial Feature (색상-공간 특징을 사용한 내용기반 칼라 이미지 검색 시스템의 설계 및 구현)

  • An, Cheol-Ung;Kim, Seung-Ho
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.5 no.5
    • /
    • pp.628-638
    • /
    • 1999
  • In this paper, we present a method for retrieving 24 bpp RGB images based on color-spatial features. Each image is converted from RGB to the perceptually uniform CIE L*u*v* color space and then subdivided into regions by color similarity. Our segmentation algorithm constrains the size of regions, because very small regions can be discarded and spatial features are difficult to extract from very large regions. For each region, the average color and the region center are extracted to form the color-spatial features. During retrieval, the color and spatial features of the query are compared with those of the database images using our similarity measure to determine the set of candidate images to be retrieved. We implemented a content-based color image retrieval system using the proposed method; the system can retrieve images from a user-drawn graphic query or an example image query. Experimental results show a Recall/Precision of 0.80/0.84.
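
A compact sketch of the color-spatial features described above, assuming scikit-image's CIE L*u*v* conversion and a precomputed region label map: each region is summarized by its average L*u*v* color and normalized centroid, and two regions are compared by a weighted distance. The weights are illustrative, not the paper's similarity measure.

```python
import numpy as np
from skimage.color import rgb2luv

def region_features(image_rgb, label_map, label):
    """Average CIE L*u*v* color and normalized centroid of one segmented region."""
    luv = rgb2luv(image_rgb)                     # expects float RGB in [0, 1]
    mask = (label_map == label)
    mean_color = luv[mask].mean(axis=0)
    ys, xs = np.nonzero(mask)
    h, w = label_map.shape
    centroid = np.array([ys.mean() / h, xs.mean() / w])
    return mean_color, centroid

def region_similarity(feat_q, feat_db, w_color=0.7, w_spatial=0.3):
    """Distance-based similarity combining color and centroid terms (weights illustrative)."""
    (cq, pq), (cd, pd) = feat_q, feat_db
    d = w_color * np.linalg.norm(cq - cd) + w_spatial * np.linalg.norm(pq - pd)
    return 1.0 / (1.0 + d)
```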

Semantic Image Retrieval Using Color Distribution and Similarity Measurement in WordNet (컬러 분포와 WordNet상의 유사도 측정을 이용한 의미적 이미지 검색)

  • Choi, Jun-Ho;Cho, Mi-Young;Kim, Pan-Koo
    • The KIPS Transactions:PartB
    • /
    • v.11B no.4
    • /
    • pp.509-516
    • /
    • 2004
  • The semantic interpretation of an image is incomplete without some mechanism for understanding semantic content that is not directly visible. For this reason, human-assisted content annotation through natural language attaches a textual description to the image. However, keyword-based retrieval operates at the level of syntactic pattern matching: dissimilarity between terms is usually computed by string matching rather than concept matching. In this paper, we propose a method for computing semantic similarity in the WordNet space, considering edges, depth, link type, and density as well as the existence of common ancestors. We also apply this similarity measure to semantic image retrieval and combine it with low-level features using a spatial color distribution model. When tested on an image set from Microsoft's 'Design Gallery Line', the proposed method outperforms the other approaches.
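
The paper defines its own WordNet measure over edges, depth, link type, density, and common ancestors; as a stand-in that illustrates concept-level (rather than string-level) matching, the sketch below uses NLTK's built-in Wu-Palmer similarity, which likewise relies on depth and common ancestors.

```python
from nltk.corpus import wordnet as wn   # requires: nltk.download('wordnet')

def concept_similarity(word_a, word_b):
    """Best Wu-Palmer similarity over all noun-sense pairs of the two words."""
    best = 0.0
    for sa in wn.synsets(word_a, pos=wn.NOUN):
        for sb in wn.synsets(word_b, pos=wn.NOUN):
            s = sa.wup_similarity(sb)
            if s is not None and s > best:
                best = s
    return best

# Concept matching scores related terms highly even when the strings differ entirely.
print(concept_similarity("tiger", "cat"))
```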

Perceptual Color Difference based Image Quality Assessment Method and Evaluation System according to the Types of Distortion (인지적 색 차이 기반의 이미지 품질 평가 기법 및 왜곡 종류에 따른 평가 시스템 제안)

  • Lee, Jee-Yong;Kim, Young-Jin
    • Journal of KIISE
    • /
    • v.42 no.10
    • /
    • pp.1294-1302
    • /
    • 2015
  • Many image quality assessment metrics that precisely reflect the human visual system (HVS) have been researched. The Structural SIMilarity (SSIM) index is a notable HVS-aware metric that utilizes structural information, since the HVS is sensitive to the overall structure of an image. However, SSIM fails to deal with color difference in terms of the HVS. To address this, the Structural and Hue SIMilarity (SHSIM) index was proposed with the Hue, Saturation, Intensity (HSI) model as its color space, but it still cannot reflect the HVS-aware color difference between two color images. In this paper, we propose a new image quality assessment method for color images using the CIE Lab color space. In addition, using a support vector machine (SVM) classifier, we propose an optimization system that applies the optimal metric according to the type of distortion. To evaluate the proposed index, the LIVE database, the best-known database in the image quality assessment field, is employed along with four evaluation criteria. Experimental results show that the proposed index is more consistent with the HVS than the other methods.
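
A minimal sketch of the two ingredients named above, using scikit-image: a perceptual color-difference score as the mean CIEDE2000 distance in CIE Lab space, and SSIM computed on the luminance channel for comparison. The paper's actual index and the SVM-based metric selection are not reproduced here.

```python
import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000
from skimage.metrics import structural_similarity

def perceptual_color_difference(ref_rgb, dist_rgb):
    """Mean CIEDE2000 difference in CIE Lab space (lower means more similar).
    ref_rgb/dist_rgb: float RGB arrays in [0, 1]."""
    return float(deltaE_ciede2000(rgb2lab(ref_rgb), rgb2lab(dist_rgb)).mean())

def structural_score(ref_rgb, dist_rgb):
    """Plain SSIM on the L (lightness) channel, for comparison with the color term."""
    ref_l = rgb2lab(ref_rgb)[..., 0]
    dist_l = rgb2lab(dist_rgb)[..., 0]
    return structural_similarity(ref_l, dist_l, data_range=100.0)
```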

Analysis of Voice Color Similarity for the development of HMM Based Emotional Text to Speech Synthesis (HMM 기반 감정 음성 합성기 개발을 위한 감정 음성 데이터의 음색 유사도 분석)

  • Min, So-Yeon;Na, Deok-Su
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.15 no.9
    • /
    • pp.5763-5768
    • /
    • 2014
  • Maintaining a voice color is important when compounding both the normal voice because an emotion is not expressed with various emotional voices in a single synthesizer. When a synthesizer is developed using the recording data of too many expressed emotions, a voice color cannot be maintained and each synthetic speech is can be heard like the voice of different speakers. In this paper, the speech data was recorded and the change in the voice color was analyzed to develop an emotional HMM-based speech synthesizer. To realize a speech synthesizer, a voice was recorded, and a database was built. On the other hand, a recording process is very important, particularly when realizing an emotional speech synthesizer. Monitoring is needed because it is quite difficult to define emotion and maintain a particular level. In the realized synthesizer, a normal voice and three emotional voice (Happiness, Sadness, Anger) were used, and each emotional voice consists of two levels, High/Low. To analyze the voice color of the normal voice and emotional voice, the average spectrum, which was the measured accumulated spectrum of vowels, was used and the F1(first formant) calculated by the average spectrum was compared. The voice similarity of Low-level emotional data was higher than High-level emotional data, and the proposed method can be monitored by the change in voice similarity.