• Title/Summary/Keyword: images of scientists

Supervised Classification Systems for High Resolution Satellite Images (고해상도 위성영상을 위한 감독분류 시스템)

  • 전영준; 김진일
    • Journal of KIISE: Computing Practices and Letters / v.9 no.3 / pp.301-310 / 2003
  • In this paper, we design and implement supervised classification systems for high-resolution satellite images. The systems support various interfaces and statistical data of the training samples so that the most effective training data can be selected. In addition, the modularized design makes it easy to extend the systems with new classification algorithms and satellite image formats. The classifiers take the characteristics of the spectral bands of the selected training data into account. They provide various supervised classification algorithms, including parallelepiped, minimum distance, Mahalanobis distance, maximum likelihood, and fuzzy-theory classifiers. We used IKONOS images as input and verified the systems on the classification of high-resolution satellite images.
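
A minimal sketch of one of the listed algorithms, the minimum-distance-to-mean classifier, is shown below. It assumes class mean signatures are estimated from user-selected training samples over the spectral bands; the band count, class names, and data are hypothetical, so this is an illustration rather than the paper's implementation.

```python
import numpy as np

def minimum_distance_classify(pixels, class_means):
    """Assign each pixel (a row of spectral band values) to the class
    whose mean spectral signature is closest in Euclidean distance."""
    # pixels: (n_pixels, n_bands); class_means: (n_classes, n_bands)
    dists = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return np.argmin(dists, axis=1)

# Hypothetical training samples per class, e.g. drawn from user-selected
# polygons on a 4-band IKONOS scene.
training = {
    "water": np.random.rand(50, 4),
    "vegetation": np.random.rand(50, 4),
}
means = np.vstack([samples.mean(axis=0) for samples in training.values()])
labels = minimum_distance_classify(np.random.rand(1000, 4), means)
```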

A Shape Feature Extraction Method for Topographical Image Databases (지형/지물 이미지 데이타베이스를 위한 형태 특징 추출 방법)

  • Kwon Yong-Il; Park Ho-Hyun; Lee Seok-Lyong; Chung Chin-Wan
    • Journal of KIISE: Databases / v.33 no.4 / pp.384-395 / 2006
  • Topographical images such as aerial and satellite images are usually similar in color and texture but not in shape. Thus, shape features and the methods of extracting them are critical for effective image retrieval from topographical image databases. In this paper, we propose a shape feature extraction method for topographical image retrieval. The method extracts a set of attributes that can model the presence of holes and disconnected regions in images and is tolerant to pre-processing errors, more specifically segmentation errors. Various experiments suggest that retrieval using attributes extracted by the proposed method performs better than retrieval using existing shape feature extraction methods.
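
To make the kind of attributes involved concrete, the sketch below computes a toy descriptor of a binary segmentation mask that captures disconnected regions and holes; the paper's specific attribute set and its tolerance mechanism are not given in the abstract, so this is only an illustrative stand-in.

```python
import numpy as np
from skimage import measure

def shape_attributes(mask):
    """Toy shape descriptor for a binary segmentation mask: the number of
    disconnected regions, the number of holes, and a mean solidity value."""
    labeled = measure.label(mask, connectivity=2)
    regions = measure.regionprops(labeled)
    n_regions = len(regions)
    # For 2D binary images, Euler number = #objects - #holes.
    n_holes = n_regions - measure.euler_number(mask, connectivity=2)
    mean_solidity = float(np.mean([r.solidity for r in regions])) if regions else 0.0
    return np.array([n_regions, n_holes, mean_solidity])

# Example: a square region with a hole plus a separate small region.
mask = np.zeros((64, 64), dtype=bool)
mask[8:40, 8:40] = True
mask[20:28, 20:28] = False   # hole
mask[50:60, 50:60] = True    # disconnected region
print(shape_attributes(mask))
```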

Real-Time Panorama Video Generation System using Multiple Networked Cameras (다중 네트워크 카메라 기반 실시간 파노라마 동영상 생성 시스템)

  • Choi, KyungYoon; Jun, KyungKoo
    • Journal of KIISE / v.42 no.8 / pp.990-997 / 2015
  • Panoramic image creation has been studied extensively. Existing methods use customized hardware or apply post-processing methods to seamlessly stitch images, which increases either cost or complexity. In addition, images can only be stitched under certain conditions, such as the existence of matching feature points between the images. This paper proposes a low-cost and easy-to-use system that produces real-time panoramic video. We use an off-the-shelf embedded platform to capture multiple images, which are then transmitted to a server in a compressed format to be merged into a single panoramic video. Finally, we analyze the performance of the implemented system by measuring the time needed to successfully create the panoramic image.
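
The server-side merging step can be sketched with OpenCV's feature-based stitcher, as below. The stream URLs are hypothetical, and the paper's actual pipeline (compression, transport, and stitching strategy) is not detailed in the abstract, so this only approximates the idea.

```python
import cv2

# Hypothetical network camera streams feeding the stitching server.
stream_urls = ["rtsp://cam1/stream", "rtsp://cam2/stream", "rtsp://cam3/stream"]
captures = [cv2.VideoCapture(url) for url in stream_urls]
stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)

while True:
    frames = []
    for cap in captures:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    if len(frames) != len(captures):
        break  # a stream dropped out
    status, panorama = stitcher.stitch(frames)
    if status == cv2.Stitcher_OK:
        cv2.imshow("panorama", panorama)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
```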

Fast and Accurate Rigid Registration of 3D CT Images by Combining Feature and Intensity

  • June, Naw Chit Too; Cui, Xuenan; Li, Shengzhe; Kim, Hak-Il; Kwack, Kyu-Sung
    • Journal of Computing Science and Engineering / v.6 no.1 / pp.1-11 / 2012
  • Computed tomography (CT) images are widely used for the analysis of the temporal evolution or the monitoring of the progression of a disease. Follow-up examinations of CT scan images of the same patient require a 3D registration technique. In this paper, an automatic and robust method is proposed for the rigid registration of 3D CT images. The proposed method involves two steps. First, the two CT volumes are aligned based on their principal axes; then, the alignment from the previous step is refined by optimizing a voxel-based similarity score. Normalized cross correlation (NCC) is used as the similarity metric, and a downhill simplex method is employed to find the optimal score. The performance of the algorithm is evaluated on phantom images and synthetic knee CT images. By extracting the initial transformation parameters from the principal axes of the binary volumes, the search space in the optimization step is reduced. Thus, the overall registration time is decreased without deterioration of the accuracy. The preliminary experimental results of the study demonstrate that the proposed method can be applied to rigid registration problems of real patient images.
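
The two-step idea, a principal-axes/centroid initialization followed by downhill simplex (Nelder-Mead) refinement of an NCC score, can be sketched as follows. For brevity the refinement optimizes only a translation; the paper's full rigid transform (rotation plus translation) and its binarization step are not reproduced, and the helper names are hypothetical.

```python
import numpy as np
from scipy.ndimage import affine_transform
from scipy.optimize import minimize

def ncc(a, b):
    """Normalized cross correlation between two volumes of equal shape."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def centroid(volume, threshold=0):
    """Centroid of the thresholded (binary) volume, used for initialization."""
    coords = np.argwhere(volume > threshold)
    return coords.mean(axis=0)

def refine_translation(fixed, moving, init_offset):
    """Refine a (z, y, x) translation with downhill simplex, maximizing NCC."""
    def cost(offset):
        resampled = affine_transform(moving, np.eye(3), offset=offset, order=1)
        return -ncc(fixed, resampled)
    return minimize(cost, init_offset, method="Nelder-Mead").x

# Usage sketch: initialize from the centroid difference, then refine.
# offset0 = centroid(moving_volume) - centroid(fixed_volume)
# best_offset = refine_translation(fixed_volume, moving_volume, offset0)
```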

Classification of Brain Magnetic Resonance Images using 2 Level Decision Tree Learning (2 단계 결정트리 학습을 이용한 뇌 자기공명영상 분류)

  • Kim, Hyung-Il; Kim, Yong-Uk
    • Journal of KIISE: Software and Applications / v.34 no.1 / pp.18-29 / 2007
  • In this paper, we present a system that classifies brain MR images using 2-level decision tree learning. There are two kinds of information that can be obtained from images. One is the low-level features such as size, color, texture, and contour that can be acquired directly from the raw images; the other is the high-level features such as the existence of certain objects and the spatial relations between different parts, which must be obtained through the interpretation of segmented images. Learning and classification should be performed based on the high-level features to classify images according to their semantic meaning. The proposed system applies decision tree learning to each level separately, and the high-level features are synthesized from the results of the low-level classification. Experimental results on a set of brain MR images with tumors are discussed, along with several experiments that show the effectiveness of the proposed system.
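
The two-level idea, a first tree over low-level region features whose predictions are aggregated into image-level features for a second tree, might look like the sketch below. Feature choices, labels, and data are hypothetical placeholders rather than the paper's setup.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Level 1: classify segmented regions from low-level features
# (e.g. size, intensity, texture) into region classes such as
# "tumor-like" vs. "normal tissue". Data here are random placeholders.
region_features = np.random.rand(200, 4)
region_labels = np.random.randint(0, 2, 200)
level1 = DecisionTreeClassifier(max_depth=4).fit(region_features, region_labels)

def high_level_features(image_regions):
    """Synthesize image-level features from level-1 region predictions,
    e.g. how many tumor-like regions an image contains."""
    preds = level1.predict(image_regions)
    return np.array([preds.sum(), len(preds), preds.mean()])

# Level 2: classify whole images from the synthesized high-level features.
images = [np.random.rand(np.random.randint(3, 8), 4) for _ in range(50)]
X2 = np.vstack([high_level_features(regions) for regions in images])
y2 = np.random.randint(0, 2, 50)
level2 = DecisionTreeClassifier(max_depth=3).fit(X2, y2)
```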

Collaborative Local Active Appearance Models for Illuminated Face Images (조명얼굴 영상을 위한 협력적 지역 능동표현 모델)

  • Yang, Jun-Young; Ko, Jae-Pil; Byun, Hye-Ran
    • Journal of KIISE: Software and Applications / v.36 no.10 / pp.816-824 / 2009
  • In the face space, face images under illumination and pose variations have a nonlinear distribution. Active Appearance Models (AAMs), which are based on a linear model, are limited in handling this nonlinear distribution of face images. In this paper, we assume that a few clusters of face images are given; we build local AAMs according to the clusters of face images and then select the proper AAM model during the fitting phase. To solve the problem of updating fitting parameters among the models when the model changes, we propose building relationships among the clusters in the parameter space in advance from the training images. In addition, we suggest gradual model switching to reduce improper model selection caused by severe fitting failures. In our experiments, we apply the proposed model to the Yale Face Database B and compare it with the previous method. The proposed method demonstrated successful fitting results on strongly illuminated face images with deep shadows.
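
Selecting the proper local model for a new face is the central mechanism here; the sketch below illustrates that selection step only, using per-cluster PCA appearance models as a simplified stand-in for local AAMs (which also model shape), with hypothetical cluster names and data.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical illumination clusters of vectorized face images.
clusters = {name: np.random.rand(40, 64 * 64)
            for name in ("frontal", "left-lit", "right-lit")}
local_models = {name: PCA(n_components=10).fit(X) for name, X in clusters.items()}

def select_local_model(face_vec):
    """Pick the local appearance model that reconstructs the input face best,
    a simplified stand-in for choosing the proper local AAM during fitting."""
    def recon_error(model):
        coeffs = model.transform(face_vec[None, :])
        recon = model.inverse_transform(coeffs)[0]
        return float(np.linalg.norm(face_vec - recon))
    return min(local_models, key=lambda name: recon_error(local_models[name]))

best = select_local_model(np.random.rand(64 * 64))
```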

A Categorization Scheme of Tag-based Folksonomy Images for Efficient Image Retrieval (효과적인 이미지 검색을 위한 태그 기반의 폭소노미 이미지 카테고리화 기법)

  • Ha, Eunji; Kim, Yongsung; Hwang, Eenjun
    • KIISE Transactions on Computing Practices / v.22 no.6 / pp.290-295 / 2016
  • Recently, folksonomy-based image-sharing sites, where users cooperatively create and use image annotation tags, have been gaining popularity. Typically, these sites retrieve images for a user request using simple text-based matching and display the retrieved images in the form of a photo stream. However, the tags are personal and subjective and the images are not categorized, which results in poor retrieval accuracy and low user satisfaction. In this paper, we propose a categorization scheme for folksonomy images that can improve the retrieval accuracy of tag-based image retrieval systems. In this scheme, images are classified by semantic similarity using the text information and image information generated on the folksonomy. To evaluate the performance of the proposed scheme, we collect folksonomy images, categorize them using text features and image features, and then compare its retrieval accuracy with that of existing systems.
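
One simple way to combine text and image information for categorization is to concatenate normalized tag and image feature vectors and cluster them, as sketched below. The tag strings, feature dimensions, and clustering choice (k-means) are hypothetical; the paper's actual similarity measure and categorization procedure are not given in the abstract.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import normalize

# Hypothetical folksonomy items: user tags plus a precomputed image feature
# (e.g. a color histogram) for each image.
tags = ["sunset beach sea", "cat kitten pet", "beach ocean waves", "dog puppy pet"]
image_feats = np.random.rand(len(tags), 64)

text_feats = TfidfVectorizer().fit_transform(tags).toarray()
combined = np.hstack([normalize(text_feats), normalize(image_feats)])
categories = KMeans(n_clusters=2, n_init=10).fit_predict(combined)
```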

Ocean Color Monitoring of Coastal Environments in the Asian Waters

  • Tang, Danling; Kawamura, Hiroshi
    • Journal of the Korean Society of Oceanography / v.37 no.3 / pp.154-159 / 2002
  • Satellite remote sensing technology for ocean observation has evolved considerably over the last twenty years. Ocean color is one of the most important parameters of ocean satellite measurements. This paper describes an ocean color remote sensing data project, the Asian I-Lac Project, and introduces several case studies using satellite images in the Asian waters. The Asian waters involve about 30 Asian countries, representing about 60% of the world population. The project aims at generating long-term time series images (planned for 10 years, from 1996 to 2006) by combining several ocean color satellite data sources, i.e., ADEOS-I OCTS and SeaWiFS, and some other sensors. Typical parameters that can be measured include Chlorophyll-a (Chl-a), Colored Dissolved Organic Matter (CDOM), and Suspended Material (SSM). Reprocessed OCTS images display the spatial variation of Chl-a, CDOM, and SSM in the Asian waters; short-term variability of phytoplankton blooms was observed in the Gulf of Oman in November 1996 by analyzing OCTS and NOAA sea surface temperature (SST) data; Chl-a concentrations derived from OCTS and SeaWiFS have also been evaluated in coastal areas of the Taiwan Strait, the Gulf of Thailand, the northeast Arabian Sea, and the Japan Sea. The data system provides scientists with the capability of testing or developing ocean color algorithms and of transferring images for their research. We have also analyzed the availability of OCTS images. The results demonstrate the potential of long-term time series of satellite ocean color data for research in marine biology and ocean studies. The case studies show multiple applications of satellite images in monitoring coastal environments in the Asian waters.

Development and Architecture of Video-to-Images to Enhance User Experience for Video Content Consumption (동영상 콘텐트 소비의 사용자 경험 향상을 위한 V2I(Video to Images) 기술 및 그 구조)

  • Jeon, Kyuyeong; Yang, Jinhong; Kim, Yongrok; Park, Hyojin; Jung, Sungkwan
    • KIISE Transactions on Computing Practices / v.22 no.7 / pp.326-331 / 2016
  • The proportion of video content consumption is growing dramatically, but some users avoid it. The reasons include the initial loading time, a lack of time to watch video content, and, particularly on mobile devices, traffic issues. The proposed Video-to-Images (V2I) technology offers a new user experience to end users by converting video into images without effort from content providers or users. We introduce how users consume video content through this new type of content produced by V2I technology, along with the advantages of the new user experience. Furthermore, the overall architecture of V2I is explained.
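
As one concrete interpretation of the conversion step, the sketch below samples still images from a video at a fixed interval using OpenCV. The actual V2I frame-selection logic and server architecture are not described in the abstract, so the function and its parameters are hypothetical.

```python
import cv2

def video_to_images(video_path, every_n_seconds=5.0):
    """Sample one frame every `every_n_seconds` from a video file, a minimal
    stand-in for a server-side video-to-images conversion step."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    step = max(int(fps * every_n_seconds), 1)
    images, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            images.append(frame)
        index += 1
    cap.release()
    return images

# frames = video_to_images("sample.mp4", every_n_seconds=10.0)
```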

Automatic Tagging for Social Images using Convolution Neural Networks (CNN을 이용한 소셜 이미지 자동 태깅)

  • Jang, Hyunwoong; Cho, Soosun
    • Journal of KIISE / v.43 no.1 / pp.47-53 / 2016
  • As the Internet develops rapidly, huge amounts of image data collected from smartphones, digital cameras, and black boxes (dashboard cameras) are being shared through social media sites. Generally, social images are handled by tagging them with information. Due to the ease of sharing multimedia and the explosive increase in the amount of tag information, some users may consider it too much of a hassle to put tags on images. Image retrieval is likely to be less accurate when tags are absent or mislabeled. In this paper, we suggest a method of extracting tags from social images by using the image content. In this method, a CNN (Convolutional Neural Network) is trained on labeled ImageNet images, and it extracts labels from Instagram images. We use the extracted labels for automatic image tagging. The experimental results show that the accuracy is higher than that of Instagram retrievals.
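
The tagging step, using an ImageNet-trained CNN to predict labels that are then used as tags, could be sketched with a pretrained torchvision model as below. The paper's actual network and training details are not given in the abstract, so ResNet-50 and the top-k choice here are assumptions.

```python
import torch
from torchvision import models
from PIL import Image

# Pretrained ImageNet CNN as the label extractor (ResNet-50 is an assumption).
weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()
categories = weights.meta["categories"]

def auto_tags(image_path, top_k=5):
    """Return the top-k ImageNet class names as candidate tags for an image."""
    image = Image.open(image_path).convert("RGB")
    with torch.no_grad():
        logits = model(preprocess(image).unsqueeze(0))
    probs = torch.softmax(logits, dim=1)[0]
    top = torch.topk(probs, top_k)
    return [categories[i] for i in top.indices.tolist()]

# tags = auto_tags("instagram_photo.jpg")  # e.g. ["beagle", "tennis ball", ...]
```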