• Title/Summary/Keyword: Complex images

GLIBP: Gradual Locality Integration of Binary Patterns for Scene Images Retrieval

  • Bougueroua, Salah; Boucheham, Bachir
    • Journal of Information Processing Systems / v.14 no.2 / pp.469-486 / 2018
  • We propose an enhanced version of the local binary pattern (LBP) operator for texture extraction in images in the context of image retrieval. The novelty of our proposal is based on the observation that the LBP exploits only the lowest level of local information through the global histogram, which reflects merely the statistical distribution of the various LBP codes in the image. The block-based LBP, which uses local histograms of LBP codes, was one of the few attempts to capture higher-level textural information. We believe that important and useful local information lying between these two levels is simply ignored by both schemes. The newly developed method, gradual locality integration of binary patterns (GLIBP), is a novel attempt to capture as much local information as possible, in a gradual fashion. GLIBP aggregates the texture features extracted by LBP from grayscale images through a composite structure: a framework comprising a multitude of ellipse-shaped regions arranged in concentric circular layers of increasing size, derived from a simple parameterized generator. The elliptic forms allow texture directionality to be targeted, a very useful property in texture characterization, and the general framework of ellipses also takes spatial information (specifically rotation) into account. The effectiveness of GLIBP was investigated on the Corel-1K (Wang) dataset and compared with published works, including the very effective DLEP. Results show significantly higher or comparable performance of GLIBP relative to the other methods, which qualifies it as a good tool for scene image retrieval.
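As a point of reference for the locality levels discussed in this abstract, the sketch below computes the global LBP histogram and a block-based (local-histogram) LBP descriptor using scikit-image; the elliptical, circular-concentric region framework that defines GLIBP itself is not reproduced, and the neighborhood and grid parameters are illustrative assumptions.

```python
# Minimal sketch of the global and block-based LBP descriptors that GLIBP
# generalizes; parameters (P, R, grid, bins) are illustrative, not the paper's.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, P=8, R=1, bins=256):
    """Global LBP histogram: the lowest level of locality discussed above."""
    codes = local_binary_pattern(gray, P, R, method="default")
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins), density=True)
    return hist

def block_lbp_histograms(gray, grid=(4, 4), P=8, R=1, bins=256):
    """Block-based LBP: one local histogram per cell, concatenated."""
    h, w = gray.shape
    bh, bw = h // grid[0], w // grid[1]
    feats = [lbp_histogram(gray[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw], P, R, bins)
             for i in range(grid[0]) for j in range(grid[1])]
    return np.concatenate(feats)
```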

Analysis of Image Similarity Index of Woven Fabrics and Virtual Fabrics - Application of Textile Design CAD System and Shuttle Loom - (직물과 가상소재의 화상 유사성 분석 연구 - 수직기 및 텍스타일 CAD시스템 활용 -)

  • Yoon, Jung-Won; Kim, Jong-Jun
    • Fashion & Textile Research Journal / v.15 no.6 / pp.1010-1017 / 2013
  • The global textile and fashion industries have gradually shifted their focus to high value-added, high-sensibility, and multi-functional products based on human-friendliness and sustainable growth technologies. Textile design CAD systems have been developed in conjunction with advances in computer hardware and software. This study compares the patterns or images of actual woven fabrics and virtual fabrics prepared with a textile design CAD system. Several weave structures (including a fancy yarn weave and patterned weaves) were prepared on a shuttle loom, and the woven textile images were captured with a CCD camera. The same weave structure data and yarn data were fed into a textile design CAD system to simulate fabric images as closely as possible. Similarity index analysis methods were then used to quantify the agreement between each actual fabric specimen and the simulated image of the corresponding fabric. The results showed that repeated small-pattern weaves yield higher similarity index values than the fancy yarn weave, which exhibits irregularities due to the fancy yarn attributes. The Complex Wavelet Structural Similarity (CW-SSIM) index performed better than the other methods, such as Multi-Scale (MS) SSIM and Feature Similarity (FS) SSIM, across the fabric specimen images. A correlation analysis between the image-based similarity index and a similarity evaluation by panel members was also carried out.
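The similarity comparison described above can be illustrated with plain single-scale SSIM from scikit-image; this is only a stand-in for the CW-SSIM, MS-SSIM, and FS-SSIM indices used in the study, and the file names are placeholders.

```python
# Hedged sketch: single-scale SSIM between a fabric photograph and its CAD
# simulation; CW-SSIM and MS-SSIM would require dedicated implementations.
import cv2
from skimage.metrics import structural_similarity as ssim

real = cv2.imread("fabric_photo.png", cv2.IMREAD_GRAYSCALE)         # CCD image of the woven specimen
virtual = cv2.imread("fabric_simulated.png", cv2.IMREAD_GRAYSCALE)  # textile CAD simulation

virtual = cv2.resize(virtual, (real.shape[1], real.shape[0]))        # align sizes before comparison
score, ssim_map = ssim(real, virtual, full=True)
print(f"SSIM between specimen and simulation: {score:.3f}")
```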

Image Enhancement Method Research for Face Detection (얼굴 검출을 위한 영상 향상 방법 연구)

  • Jun, In-Ja; Chung, Kyung-Yong
    • The Journal of the Korea Contents Association / v.9 no.10 / pp.13-21 / 2009
  • This paper describes research on image enhancement for the detection of face regions. Typical face recognition algorithms use fixed-parameter filtering to optimize face images for the recognition process, but a fixed filtering scheme introduces errors when applied to face images captured under varying environmental conditions. To acquire good-quality face images from scenes with complex backgrounds and illumination, we propose an image enhancement method based on categories derived from image intensity values. When an image is acquired, average intensity values are computed from a sub-window and compared with training values computed during preprocessing. A category is then selected and the most suitable filter is applied to the image. We used histogram equalization and gamma correction with two different parameters, choosing the most suitable of these three filters for each image. An increase in the enrollment rate of the filtered images was observed compared with the enrollment rate of the original images.
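The category-based filter selection described above might look roughly like the sketch below, which switches between gamma correction and histogram equalization according to the mean intensity of an (assumed) face sub-window; the thresholds and gamma values are illustrative guesses, not the paper's trained categories.

```python
# Rough sketch of intensity-category-based enhancement; thresholds and gammas
# are illustrative, and the paper's training-based category selection is not
# reproduced.
import cv2
import numpy as np

def gamma_correct(gray, gamma):
    lut = np.array([255.0 * (i / 255.0) ** gamma for i in range(256)], dtype=np.uint8)
    return cv2.LUT(gray, lut)

def enhance_for_detection(gray):
    mean = float(gray.mean())
    if mean < 70:             # dark scene: brighten with gamma < 1
        return gamma_correct(gray, 0.6)
    if mean > 180:            # bright scene: darken with gamma > 1
        return gamma_correct(gray, 1.5)
    return cv2.equalizeHist(gray)   # mid-range: spread the histogram
```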

An Analysis and Evaluation of Urban Landscapes Using Images Taken with a Fish-eye Lens (천공사진(天空寫眞)을 이용한 도시경관의 분석 및 평가)

  • Han, Gab-Soo; Yoon, Young-Hwal; Jo, Hyun-Kil
    • Journal of the Korean Institute of Landscape Architecture / v.33 no.4 s.111 / pp.11-21 / 2005
  • The purpose of this study was to analyze and evaluate landscape characteristics through a classification of landscapes in Chuncheon. A system was developed to convert images taken with a fish-eye lens into panoramic pictures. Landscape characteristics were analyzed according to the number of times landscape elements appeared and the amount of area each element occupied in the panoramic picture, and each picture was classified into one of five types based on these factors. Landscape evaluation was carried out using dynamic images converted from the fish-eye photographs. The results of this study can be summarized as follows. The urban landscape is characterized by four essential factors: interconnectedness, nature, urban centrality, and landscape scale. Five landscape types were identified: detached residential building landscape (type 1), street landscape with various elements (type 2), street landscape in the city center (type 3), housing complex landscape (type 4), and green space landscape (type 5). Type 5 showed the highest degree of landscape satisfaction, and satisfaction increased with the number of appearances of natural elements. The amount of green space was highly correlated with landscape satisfaction.
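The fish-eye-to-panorama conversion mentioned above is, in essence, a polar-to-Cartesian unwrap. The sketch below shows one generic way to do this with OpenCV, assuming a centered circular image; it is not the study's own conversion system, and the file name and sampling resolution are placeholders.

```python
# Generic unwrap of a circular fish-eye (sky) photograph into a panoramic strip.
import cv2

fisheye = cv2.imread("sky_photo.jpg")                    # placeholder file name
h, w = fisheye.shape[:2]
center = (w / 2.0, h / 2.0)                              # assume the image circle is centered
radius = int(min(w, h) / 2)                              # assume it touches the shorter edge

# warpPolar samples radius along x and azimuth along y; rotate so azimuth runs horizontally.
pano = cv2.warpPolar(fisheye, (radius, 720), center, radius, cv2.WARP_POLAR_LINEAR)
pano = cv2.rotate(pano, cv2.ROTATE_90_COUNTERCLOCKWISE)
```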

THE ADAPTATION METHOD IN THE MONTE CARLO SIMULATION FOR COMPUTED TOMOGRAPHY

  • Lee, Hyounggun; Yoon, Changyeon; Cho, Seungryong; Park, Sung Ho; Lee, Wonho
    • Nuclear Engineering and Technology / v.47 no.4 / pp.472-478 / 2015
  • The patient dose incurred from diagnostic procedures during advanced radiotherapy has become an important issue. Many researchers in medical physics use computational simulations to calculate complex parameters in experiments. However, extended computation times make it difficult for personal computers to run the conventional Monte Carlo method when simulating radiological images with high photon fluxes, such as those produced by computed tomography (CT). To minimize the computation time without degrading image quality, we applied a deterministic adaptation to the Monte Carlo calculation and verified its effectiveness by simulating CT image reconstruction for an image evaluation phantom (Catphan; Phantom Laboratory, New York, NY, USA) and a human-like voxel phantom (KTMAN-2) (Los Alamos National Laboratory, Los Alamos, NM, USA). For the deterministic adaptation, the relationship between the number of iterations and the simulation results was estimated, and the option of simulating scattered radiation was evaluated. The processing times of simulations using the adaptive method were at least 500 times shorter than those using the conventional statistical process. In addition, compared with the conventional statistical method, the adaptive method produced images that were more similar to the experimental images, which proved that the adaptive method is highly effective for simulations requiring a large number of iterations. Assuming no radiation scattering in the vicinity of the detectors minimized artifacts in the reconstructed image.
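As a toy illustration (not the paper's adaptive method) of why replacing sampled photon histories with a deterministic expectation saves time, the sketch below contrasts a Monte Carlo estimate of transmission through a uniform attenuator with its closed-form Beer-Lambert value; the attenuation coefficient and thickness are arbitrary.

```python
# Toy comparison of statistical vs. deterministic photon transmission.
import numpy as np

mu, thickness = 0.2, 5.0          # attenuation coefficient (1/cm) and path length (cm), illustrative
n_photons = 1_000_000

# Statistical (Monte Carlo) estimate: sample each photon's survival individually.
rng = np.random.default_rng(0)
survived = rng.random(n_photons) < np.exp(-mu * thickness)
mc_transmission = survived.mean()

# Deterministic estimate: evaluate the Beer-Lambert expectation directly.
det_transmission = np.exp(-mu * thickness)

print(f"Monte Carlo: {mc_transmission:.5f}   deterministic: {det_transmission:.5f}")
```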

AQUACULTURE FACILITIES DETECTION FROM SAR AND OPTIC IMAGES

  • Yang, Chan-Su; Yeom, Gi-Ho; Cha, Young-Jin; Park, Dong-Uk
    • Proceedings of the KSRS Conference / 2008.10a / pp.320-323 / 2008
  • This study attempts to establish a system for extracting and monitoring culture grounds of seaweeds (laver, brown seaweed, and Capsosiphon fulvescens) and abalone on the basis of both KOMPSAT-2 and TerraSAR-X data. The study areas are located on the northwest and southwest coasts of South Korea, which are famous for coastal culture grounds. The northwest site lies in an area with a high tidal range (6.1 m on average in Asan Bay) and consists mostly of laver culture grounds. A semi-automatic detection system for laver facilities is described and assessed for spaceborne optic images. The southwest coast, in turn, is best known for seaweeds; the aquaculture facilities that cover extensive portions of this area can be subdivided into three major groups: brown seaweed, Capsosiphon fulvescens, and abalone farms. The study is based on the interpretation of optic and SAR satellite data, and a detailed image analysis procedure is described here. On May 25 and June 2, 2008, the TerraSAR-X radar satellite acquired several images of the area. SAR data are unique for mapping these farms: in the case of abalone farms, the backscatter from the surrounding dykes allows recognition and separation of abalone ponds from all other water-covered surfaces. However, identification of seaweeds such as laver, brown seaweed, and Capsosiphon fulvescens depends on the dampening effect caused by the presence of the facilities and is a complex task, because objects that resemble seaweed farms frequently occur, particularly under low wind or certain tidal conditions. Lastly, fusion of SAR and optic images is tested to enhance the detection of aquaculture facilities, using the KOMPSAT-2 panchromatic image with 1 m spatial resolution together with the corresponding multispectral image with 4 m spatial resolution and four spectral bands. The mapping accuracy achieved for the farms will be estimated and discussed after field verification of the preliminary results.
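For the pan/multispectral fusion step mentioned at the end of the abstract, a generic Brovey-style ratio sharpening is sketched below; the paper does not state which fusion algorithm was used, the file names are placeholders, and real KOMPSAT-2 products would normally be read with a geospatial library rather than OpenCV.

```python
# Hedged sketch of Brovey-style pan-sharpening (1 m pan + 4 m multispectral).
import numpy as np
import cv2

pan = cv2.imread("kompsat2_pan.tif", cv2.IMREAD_GRAYSCALE).astype(np.float32)
ms = cv2.imread("kompsat2_ms.tif", cv2.IMREAD_UNCHANGED).astype(np.float32)   # H x W x bands

ms_up = cv2.resize(ms, (pan.shape[1], pan.shape[0]), interpolation=cv2.INTER_CUBIC)
intensity = ms_up.mean(axis=2) + 1e-6                 # avoid division by zero
fused = ms_up * (pan / intensity)[..., None]          # scale each band by the pan/intensity ratio
```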

Fabrication of Optically Encoded Images on Porous Silicon (다공성 실리콘을 이용한 암호화된 광학이미지 제작)

  • Koh, Young-Dae; Kim, Sung-Jin; Kim, Jong-Hyeon; Rheu, Seong-Ok; Bang, Hyeon-Seok; Jeong, Yun-Sik; Park, Bo-Kyeong; Sohn, Hong-Lae
    • Journal of the Korean Vacuum Society / v.17 no.1 / pp.46-50 / 2008
  • Optical images on porous silicon exhibiting Fabry-Perot fringe patterns were prepared by electrochemical etching of a p-type silicon wafer (boron-doped, <100> orientation, resistivity 0.8-1.2 mΩ·cm) together with a beam projector. The images remaining in the substrate reproduced the projected optical pattern and could be useful for optical data storage. A decrease in the effective optical thickness of the Fabry-Perot layers was observed, indicative of a change in refractive index induced by exposing the porous silicon to white light. This provides the ability to fabricate complex optical encoding on the surface of silicon.
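The effective optical thickness (2nL) mentioned above is commonly estimated by Fourier transforming the reflectance spectrum against wavenumber, since Fabry-Perot fringes are periodic in 1/wavelength. The sketch below demonstrates that idea on a synthetic fringe pattern; it is a generic illustration, not the procedure reported in the paper.

```python
# Estimate 2nL from the fringe frequency of a (synthetic) reflectance spectrum.
import numpy as np

wavelength_nm = np.linspace(500.0, 1000.0, 4096)        # placeholder spectrometer axis
wavenumber = 1.0 / wavelength_nm                        # fringes are periodic in 1/lambda

true_eot_nm = 4000.0                                    # toy 2nL used to synthesize the fringes
reflectance = 0.3 + 0.1 * np.cos(2.0 * np.pi * true_eot_nm * wavenumber)

# Resample onto a uniform wavenumber grid, then locate the dominant fringe frequency (= 2nL).
k_uniform = np.linspace(wavenumber.min(), wavenumber.max(), wavenumber.size)
r_uniform = np.interp(k_uniform, wavenumber[::-1], reflectance[::-1])
amp = np.abs(np.fft.rfft(r_uniform - r_uniform.mean()))
freq = np.fft.rfftfreq(k_uniform.size, d=k_uniform[1] - k_uniform[0])
eot_nm = freq[1:][np.argmax(amp[1:])]
print(f"Recovered effective optical thickness 2nL ~ {eot_nm:.0f} nm")
```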

Automatic Extraction of 3-Dimensional Road Information Using Road Pavement Markings (도로 노면표지를 이용한 3차원 도로정보 자동추출)

  • Kim, Jin-Gon; Han, Dong-Yeub; Yu, Ki-Yun; Kim, Yong-Il
    • Journal of Korean Society for Geospatial Information Science / v.12 no.4 s.31 / pp.61-68 / 2004
  • In this paper, we suggest an automatic technique for obtaining 3-D road information in complex urban areas using road pavement markings. The method is composed of three main steps: extracting the pavement markings from aerial images, matching the same pavement markings in two aerial images, and obtaining their 3-D coordinates using the EOPs (exterior orientation parameters) of the aerial images. Here, we focus on the first and second steps, because the last step can be performed using the well-known collinearity condition equations. We used geometric properties and spatial relationships of the pavement markings to extract the lane line markings in the images, and additionally extracted arrow markings using template matching. We then obtained 3-D coordinates of the road through relational matching of the pavement markings. To evaluate the accuracy of the extraction, we performed a visual inspection and compared the results of this technique with those measured by a digital photogrammetric system.
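The arrow-marking extraction step described above relies on template matching; a minimal OpenCV version of that single step is sketched below, with placeholder file names and an assumed score threshold. The lane-line extraction and relational matching stages are not reproduced.

```python
# Template matching for arrow pavement markings in an aerial image.
import cv2
import numpy as np

aerial = cv2.imread("aerial_image.png", cv2.IMREAD_GRAYSCALE)
arrow = cv2.imread("arrow_template.png", cv2.IMREAD_GRAYSCALE)    # must be smaller than the image

response = cv2.matchTemplate(aerial, arrow, cv2.TM_CCOEFF_NORMED)
ys, xs = np.where(response >= 0.8)                    # keep strong matches only (threshold assumed)
detections = [(x, y, arrow.shape[1], arrow.shape[0]) for x, y in zip(xs, ys)]
```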

Supernova Remnants in the UWISH2 survey: A preliminary report

  • Lee, Yong-Hyun; Koo, Bon-Chul
    • The Bulletin of The Korean Astronomical Society / v.36 no.2 / pp.115.2-115.2 / 2011
  • UWISH2 (UKIRT Widefield Infrared Survey for $H_2$) is an unbiased, narrow-band imaging survey of the Galactic plane in the $H_2$ 1-0 S(1) emission line at $2.122{\mu}m$ using the Wide-Field Camera (WFCAM) at the United Kingdom Infrared Telescope (UKIRT). The survey covers about 150 square degrees of the first Galactic quadrant ($10^{\circ}$ < l < $65^{\circ}$; $-1.3^{\circ}$ < b < $+1.3^{\circ}$). The images have a $5{\sigma}$ point-source detection limit of K ~ 18 mag and a surface brightness limit of $10^{-19}\;W\;m^{-2}\;arcsec^{-2}$. Survey operations began on 28 July 2009 and were completed on 17 August 2011. We have been studying the supernova remnants (SNRs) in the UWISH2 survey area. Of the 274 known Galactic SNRs, the survey area includes 65, or 24 percent. The wide-field, high-quality UWISH2 images allow us to identify both diffuse extended and compact $H_2$ emission associated with SNRs, which is useful for understanding their physical environment and evolution. The continuum is subtracted from the narrow-band $H_2$ images using the K-band continuum images obtained as part of the UKIDSS GPS (UKIRT Infrared Deep Sky Survey of the Galactic Plane). So far, we have inspected 42 SNRs and found distinct $H_2$ emission in 14 of them, a detection rate of 33%. Some of the SNRs show bright, complex, and interesting structures that have never been reported in previous studies. In this report, we present our identification scheme and preliminary results.
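The continuum-subtraction step described above can be sketched as a scaled subtraction of the K-band image from the narrow-band $H_2$ image; the FITS file names and the median-based scale factor below are placeholder assumptions rather than the UWISH2/GPS pipeline.

```python
# Scaled continuum subtraction of a narrow-band H2 image using a K-band image.
import numpy as np
from astropy.io import fits

h2 = fits.getdata("h2_narrowband.fits").astype(float)
kband = fits.getdata("k_continuum.fits").astype(float)    # assumed already registered to the H2 frame

scale = np.nanmedian(h2) / np.nanmedian(kband)            # crude flux scaling between the two filters
h2_line_only = h2 - scale * kband                         # residual traces the H2 line emission
fits.writeto("h2_continuum_subtracted.fits", h2_line_only, overwrite=True)
```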

Improved Text Recognition using Analysis of Illumination Component in Color Images (컬러 영상의 조명성분 분석을 통한 문자인식 성능 향상)

  • Choi, Mi-Young; Kim, Gye-Young; Choi, Hyung-Il
    • Journal of the Korea Society of Computer and Information / v.12 no.3 / pp.131-136 / 2007
  • This paper proposes a new approach to eliminating the reflectance component for the detection of text in color images. Images produced by color printing normally contain an illumination component as well as a reflectance component, and it is well known that the reflectance component hinders the detection and recognition of objects such as text in a scene because it blurs the overall image. We have developed an approach that efficiently removes the reflectance component while preserving the illumination component. To determine the lighting environment, we decide whether an input image is Normal or Polarized using a histogram built from the red component. By removing the reflectance component introduced by illumination changes, the blurring of text caused by light is reduced and the text can be extracted more reliably. The experimental results show superior performance even when an image has a complex background. Text detection and recognition performance is influenced by changes in the illumination condition, and our method is robust to images with different illumination conditions.
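A generic illumination/reflectance decomposition in the spirit of this abstract is sketched below: the illumination component is estimated with a large-scale Gaussian blur in the log domain and the high-frequency reflectance residual is attenuated before text detection. This is an assumed, simplified stand-in, not the paper's Normal/Polarized category method, and the kernel size and weight are illustrative.

```python
# Suppress the reflectance component while preserving large-scale illumination.
import cv2
import numpy as np

bgr = cv2.imread("printed_page.png")                              # placeholder file name
gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) + 1.0

log_img = np.log(gray)
illumination = cv2.GaussianBlur(log_img, (0, 0), sigmaX=15)       # smooth, large-scale component
reflectance = log_img - illumination                              # high-frequency residual
suppressed = np.exp(illumination + 0.3 * reflectance)             # attenuate the reflectance term
result = cv2.normalize(suppressed, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```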
