• Title/Summary/Keywords: Image features

Search results: 3,394 items (processing time: 0.033 seconds)

Performance Improvement of Classifier by Combining Disjunctive Normal Form features

  • Min, Hyeon-Gyu;Kang, Dong-Joong
    • International Journal of Internet, Broadcasting and Communication
    • /
    • Vol. 10, No. 4
    • /
    • pp.50-64
    • /
    • 2018
  • This paper describes a visual object detection approach using ensemble-based machine learning. Object detection methods employing 1D features have the benefit of fast calculation speed. However, for real images with complex backgrounds, detection accuracy and performance degrade. In this paper, we propose an ensemble learning algorithm that combines a 1D feature classifier and a 2D DNF (Disjunctive Normal Form) classifier to improve object detection performance in a single input image. To improve computing efficiency and accuracy, we also propose a feature selection method that reduces computing time, together with an ensemble algorithm combining the 1D features and the 2D DNF features. In the verification experiments, we selected the Haar-like feature as the 1D image descriptor and demonstrated the performance of the algorithm on datasets such as faces and vehicles.
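The core of a DNF classifier is a logical OR over AND-clauses of simple feature tests. A minimal sketch of that structure follows; the feature indices and thresholds are illustrative, not taken from the paper:

```python
# Sketch of a Disjunctive Normal Form (DNF) classifier: an input is accepted
# if ANY conjunction (AND-clause) of simple threshold tests is satisfied.
# The feature thresholds below are illustrative, not from the paper.

def make_literal(index, threshold):
    """A literal: test whether feature `index` exceeds `threshold`."""
    return lambda x: x[index] > threshold

def dnf_classify(x, clauses):
    """DNF = OR over clauses; each clause = AND over its literals."""
    return any(all(lit(x) for lit in clause) for clause in clauses)

# Two clauses over a 3-dimensional feature vector:
clauses = [
    [make_literal(0, 0.5), make_literal(1, 0.2)],  # f0 > 0.5 AND f1 > 0.2
    [make_literal(2, 0.8)],                        # OR f2 > 0.8
]

print(dnf_classify([0.6, 0.3, 0.1], clauses))  # True  (first clause fires)
print(dnf_classify([0.1, 0.9, 0.1], clauses))  # False (no clause fires)
```

In the paper's ensemble, such a DNF over 2D features is combined with a fast 1D feature classifier; the sketch shows only the DNF decision rule itself.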

Features Extraction of Remote Sensed Multispectral Image Data Using Rough Sets Theory

  • 원성현;정환묵
    • 한국지능시스템학회논문지
    • /
    • Vol. 8, No. 3
    • /
    • pp.16-25
    • /
    • 1998
  • This paper proposes a feature extraction method based on rough set theory for effective classification of data in hyper-multispectral band environments. We analyze the characteristics of multispectral image data and, based on that analysis, use the discernibility of rough set theory to select the most effective bands. In experiments, the method was applied to data acquired from Landsat TM, and the bands selected by the proposed rough set approach were shown to agree with those selected by conventional band-characteristic methods, providing a theoretical basis for feature extraction in hyper-multispectral band environments.
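The band selection above rests on discernibility: a band subset is useful if it still distinguishes samples of different classes. A toy sketch of that idea with made-up discretized band values (not Landsat data):

```python
from itertools import combinations

# Toy rough-set style band selection: find the smallest subsets of bands
# whose values still discern every pair of samples from different classes.
# Data values are illustrative discretized band readings, not Landsat data.

samples = [
    ((0, 1, 0), 'water'),
    ((1, 1, 0), 'soil'),
    ((1, 0, 1), 'forest'),
    ((0, 0, 1), 'water'),
]

def discerns(bands, data):
    """True if, restricted to `bands`, no two samples of different
    classes share identical values (classification stays consistent)."""
    seen = {}
    for values, label in data:
        key = tuple(values[b] for b in bands)
        if key in seen and seen[key] != label:
            return False
        seen[key] = label
    return True

n_bands = 3
for size in range(1, n_bands + 1):
    reducts = [c for c in combinations(range(n_bands), size)
               if discerns(c, samples)]
    if reducts:
        print(reducts)  # smallest band subsets preserving discernibility
        break
```

No single band discerns all classes here, so the search returns the two-band subsets that do; a real reduct computation on hyperspectral data would use the same criterion over many more bands.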


A Method for Terrain Cover Classification Using DCT Features

  • 이승연;곽동민;성기열
    • 한국군사과학기술학회지
    • /
    • Vol. 13, No. 4
    • /
    • pp.683-688
    • /
    • 2010
  • The ability to navigate autonomously in off-road terrain is the most critical technology needed for Unmanned Ground Vehicles (UGV). In this paper, we present a method for vision-based terrain cover classification using DCT features. To classify the terrain, an image is acquired from a CCD sensor and divided into fixed-size blocks. Each block is transformed by the DCT, and features reflecting frequency-band characteristics are extracted. A neural network classifier is then used to classify the features. The proposed method is validated through extensive experiments and compared with a wavelet-feature-based method; the results show that it classifies terrain cover more efficiently than the wavelet-based approach.
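The block-wise DCT features above can be illustrated with a naive 2-D DCT-II. The 4x4 block size and the coefficients inspected are illustrative, not the paper's configuration:

```python
import math

# Sketch of DCT-based block features: apply a 2-D DCT-II to an image block
# and keep selected coefficients, which reflect frequency-band energy, as
# a feature vector. Naive O(N^4) transform, fine for small blocks.

def dct2(block):
    """Orthonormal 2-D DCT-II of a square block of floats."""
    n = len(block)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                    for x in range(n) for y in range(n))
            cu = math.sqrt(1 / n) if u == 0 else math.sqrt(2 / n)
            cv = math.sqrt(1 / n) if v == 0 else math.sqrt(2 / n)
            out[u][v] = cu * cv * s
    return out

# A flat 4x4 block concentrates all energy in the DC coefficient:
flat = [[10.0] * 4 for _ in range(4)]
coeffs = dct2(flat)
print(round(coeffs[0][0], 3))  # DC term: 40.0
print(round(coeffs[0][1], 3))  # AC terms vanish for a flat block: 0.0
```

Textured terrain blocks (grass, gravel, road) differ mainly in how energy spreads into the AC coefficients, which is what makes these features discriminative for a classifier.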

Cost Effective Image Classification Using Distributions of Multiple Features

  • Sivasankaravel, Vanitha Sivagami
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 16, No. 7
    • /
    • pp.2154-2168
    • /
    • 2022
  • Our work addresses the issues associated with the use of semantic features in the Bag of Words model, which requires construction of a dictionary. Extracting the relevant features and clustering them into a code book or dictionary is computationally intensive and requires a large storage area. Hence we propose to use a simple distribution of multiple shape-based features, a mixture of gradients, radii, and slope angles, which requires far less computation and storage but can serve as an equivalent image representation. Experiments on the PASCAL VOC 2007 dataset show accuracy marginally close to that of the Bag of Words model using a Self-Organizing Map for clustering, together with a very significant computational gain.
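One of the shape-feature distributions mentioned above, the slope-angle distribution, can be sketched as a normalized histogram of gradient orientations. The tiny image and bin count are illustrative, not the paper's settings:

```python
import math

# Sketch of an image representation built from distributions of simple
# shape features: histogram the gradient slope angles of a grayscale
# image. The paper mixes this with radius and gradient distributions.

def slope_angle_histogram(img, bins=4):
    """Normalized histogram of gradient orientations (forward differences)."""
    h = [0] * bins
    rows, cols = len(img), len(img[0])
    for y in range(rows - 1):
        for x in range(cols - 1):
            gx = img[y][x + 1] - img[y][x]
            gy = img[y + 1][x] - img[y][x]
            angle = math.atan2(gy, gx) % math.pi  # fold to [0, pi)
            h[min(int(angle / math.pi * bins), bins - 1)] += 1
    total = sum(h)
    return [c / total for c in h]  # normalize to a distribution

# A purely horizontal ramp puts all gradient angles in the first bin:
img = [
    [0, 1, 2],
    [0, 1, 2],
    [0, 1, 2],
]
print(slope_angle_histogram(img))  # [1.0, 0.0, 0.0, 0.0]
```

Unlike a Bag of Words codebook, such a histogram needs no clustering step and only `bins` numbers of storage per feature type, which is the computational advantage the abstract claims.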

Recent Advances in Feature Detectors and Descriptors: A Survey

  • Lee, Haeseong;Jeon, Semi;Yoon, Inhye;Paik, Joonki
    • IEIE Transactions on Smart Processing and Computing
    • /
    • Vol. 5, No. 3
    • /
    • pp.153-163
    • /
    • 2016
  • Local feature extraction methods for images and videos are widely applied in image understanding and computer vision. However, the latest feature detectors and descriptors find robust features differently depending on the image environment. This paper analyzes various feature extraction methods by summarizing their algorithms, specifying their properties, and comparing their performance. We analyze eight feature extraction methods, and their performance is compared and evaluated in various image environments. As a result, feature detectors and descriptors can be used adaptively for image sequences captured under various conditions. The evaluation can also be applied to driving assistance systems, closed-circuit televisions (CCTVs), robot vision, etc.

Adaptive Importance Channel Selection for Perceptual Image Compression

  • He, Yifan;Li, Feng;Bai, Huihui;Zhao, Yao
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 14, No. 9
    • /
    • pp.3823-3840
    • /
    • 2020
  • Recently, the auto-encoder has emerged as the most popular method in convolutional neural network (CNN) based image compression and has achieved impressive performance. In a traditional auto-encoder based compression model, the encoder simply sends the features of the last layer to the decoder, which cannot allocate bits over different spatial regions in an efficient way. Besides, these methods do not fully exploit the contextual information under different receptive fields for better reconstruction performance. In this paper, to solve these issues, a novel auto-encoder model is designed for image compression, which can effectively transmit the hierarchical features of the encoder to the decoder. Specifically, we first propose an adaptive bit-allocation strategy, which can adaptively select an importance channel. Then, we multiply the generated importance mask with the features of the last layer of our proposed encoder to achieve efficient bit allocation. Moreover, we present an additional novel perceptual loss function for more accurate image details. Extensive experiments demonstrate that the proposed model achieves significant superiority over JPEG and JPEG2000 in both subjective and objective quality, and outperforms state-of-the-art CNN-based image compression methods in terms of PSNR.
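The bit-allocation step above multiplies the last-layer features by an importance mask. A minimal sketch of that masking idea, with an illustrative fixed keep-count standing in for the adaptively predicted channel importance:

```python
# Sketch of importance-mask bit allocation: multiply the encoder's last
# feature map by a mask so that only important channels are transmitted.
# The fixed n_keep below is illustrative; the paper predicts importance
# adaptively per spatial position with a learned mask.

def apply_importance_mask(features, n_keep):
    """features: [channels][H][W]; zero out all channels >= n_keep."""
    c = len(features)
    return [[[v if ch < n_keep else 0.0 for v in row]
             for row in features[ch]] for ch in range(c)]

feat = [[[1.0, 2.0]], [[3.0, 4.0]], [[5.0, 6.0]]]  # 3 channels, 1x2 map
masked = apply_importance_mask(feat, n_keep=2)
print(masked)  # [[[1.0, 2.0]], [[3.0, 4.0]], [[0.0, 0.0]]]
```

Zeroed channels cost (nearly) no bits in the entropy coder, so regions whose mask keeps fewer channels receive fewer bits, which is the spatial bit-allocation effect the abstract describes.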

A Novel System for Detecting Adult Images on the Internet

  • Park, Jae-Yong;Park, Sang-Sung;Shin, Young-Geun;Jang, Dong-Sik
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 4, No. 5
    • /
    • pp.910-924
    • /
    • 2010
  • As Internet usage has increased, the risk of adolescents being exposed to adult content and harmful information on the Internet has also risen. To help prevent adolescents from accessing such content, a novel detection method for adult images is proposed. The proposed method involves three steps. First, the Image Of Interest (IOI) is extracted from the image background. Second, the IOI is distinguished from the segmented image using a novel weighting mask and determined to be acceptable or unacceptable. Finally, the features (color and texture) of the IOI or the original image are compared to a critical value; if they exceed that value, the image is deemed an adult image. A Receiver Operating Characteristic (ROC) curve analysis was performed to define this optimal critical value, and the textural features are identified using a gray-level co-occurrence matrix. The proposed method increases the precision of detection by applying the novel weighting mask and the ROC curve. To demonstrate the effectiveness of the proposed method, 2,850 adult and non-adult images were used for experimentation.
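A critical value derived from an ROC analysis, as above, is commonly chosen by maximizing Youden's J statistic (TPR minus FPR). A sketch of that selection with illustrative scores and labels (not the paper's data):

```python
# Sketch of choosing a critical value from an ROC sweep: try each observed
# score as a threshold and keep the one maximizing Youden's J = TPR - FPR.
# Scores and labels below are illustrative, not the paper's 2,850 images.

def best_threshold(scores, labels):
    """labels: 1 = adult image, 0 = non-adult. Returns (threshold, J)."""
    pos = sum(labels)
    neg = len(labels) - pos
    best = (None, -1.0)
    for t in sorted(set(scores)):
        tpr = sum(1 for s, l in zip(scores, labels) if l == 1 and s >= t) / pos
        fpr = sum(1 for s, l in zip(scores, labels) if l == 0 and s >= t) / neg
        if tpr - fpr > best[1]:
            best = (t, tpr - fpr)
    return best

scores = [0.1, 0.4, 0.35, 0.8, 0.7, 0.2]
labels = [0,   0,   1,    1,   1,   0]
print(best_threshold(scores, labels))  # threshold 0.35 maximizes J here
```

Maximizing J is only one way to pick an operating point; a deployed filter might instead fix a maximum false-positive rate and take the highest TPR threshold satisfying it.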

TSDnet: Three-scale Dense Network for Infrared and Visible Image Fusion

  • 장영매;이효종
    • 한국정보처리학회:학술대회논문집
    • /
    • 한국정보처리학회 2022년도 추계학술발표대회
    • /
    • pp.656-658
    • /
    • 2022
  • The purpose of infrared and visible image fusion is to integrate images of different modalities, with different details, into a result image with rich information that is convenient for high-level computer vision tasks. Considering that many deep networks work only at a single scale, this paper proposes a novel image fusion method based on a three-scale dense network to preserve the content and key target features of the input images in the fused image. It comprises an encoder, a three-scale block, a fusion strategy, and a decoder, and can capture incredibly rich background details and prominent target details. The encoder extracts three-scale dense features from the source images for the initial image fusion. Then, an l1-norm fusion strategy is applied to fuse the features of different scales. Finally, the fused image is reconstructed by the decoder network. Compared with existing methods, the proposed method achieves state-of-the-art fusion performance in subjective observation.
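A common form of the l1-norm fusion strategy mentioned above weights each source's features by their l1 activity. A minimal sketch with illustrative per-position feature vectors (not the network's actual feature maps):

```python
# Sketch of an l1-norm fusion strategy: at each spatial position, weight
# each source's feature vector by its l1 activity (sum of absolute values)
# relative to the other source, then blend. A common rule in deep image
# fusion; the tiny "feature maps" here are illustrative.

def l1_fuse(feat_a, feat_b):
    """feat_*: list of per-position feature vectors; returns fused vectors."""
    fused = []
    for va, vb in zip(feat_a, feat_b):
        a = sum(abs(x) for x in va)
        b = sum(abs(x) for x in vb)
        wa = a / (a + b) if a + b else 0.5
        fused.append([wa * x + (1 - wa) * y for x, y in zip(va, vb)])
    return fused

ir  = [[3.0, 1.0]]   # one position, 2 channels (e.g. infrared features)
vis = [[1.0, 1.0]]   # visible features at the same position
fused = l1_fuse(ir, vis)
print(fused)  # infrared dominates: weight 4/6 vs 2/6
```

The effect is that whichever modality has stronger feature activity at a position (e.g. a hot target in the infrared image) contributes more to the fused feature there.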

Pixel-Wise Polynomial Estimation Model for Low-Light Image Enhancement

  • Muhammad Tahir Rasheed;Daming Shi
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 17, No. 9
    • /
    • pp.2483-2504
    • /
    • 2023
  • Most existing low-light enhancement algorithms either use a large number of training parameters or lack generalization to real-world scenarios. This paper presents a novel lightweight and robust pixel-wise polynomial approximation-based deep network for low-light image enhancement. To map the low-light image to the enhanced image, pixel-wise higher-order polynomials are employed, and a deep convolution network is used to estimate their coefficients. The proposed network uses multiple branches to estimate pixel values based on different receptive fields: with the smallest receptive field, the first branch enhances local features, the second and third branches focus on medium-level features, and the last branch enhances global features. The low-light image is downsampled by a factor of 2^(b-1) (where b is the branch number) and fed as input to each branch. After combining the outputs of each branch, the final enhanced image is obtained. A comprehensive evaluation of the proposed network on six publicly available no-reference test datasets shows that it outperforms state-of-the-art methods on both quantitative and qualitative measures.
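The pixel-wise polynomial mapping above can be sketched directly; in the paper a CNN predicts per-pixel coefficients, while the fixed 3rd-order coefficients below are purely illustrative:

```python
# Sketch of pixel-wise polynomial enhancement: each output pixel is a
# higher-order polynomial of the input pixel. The fixed coefficients
# below are illustrative; the paper's network predicts them per pixel.

def enhance_pixel(x, coeffs):
    """Evaluate sum_k coeffs[k] * x**k for a normalized pixel x in [0, 1]."""
    return sum(c * x ** k for k, c in enumerate(coeffs))

# A gamma-like brightening curve via a 3rd-order polynomial:
coeffs = [0.0, 2.0, -1.5, 0.5]             # maps 0 -> 0 and 1 -> 1
print(enhance_pixel(0.0, coeffs))          # 0.0 (black stays black)
print(enhance_pixel(1.0, coeffs))          # 1.0 (white stays white)
print(enhance_pixel(0.25, coeffs) > 0.25)  # dark pixels brightened: True
```

Keeping the endpoints fixed while lifting mid-tones is the typical shape of such an enhancement curve; letting the network vary the coefficients per pixel is what makes the mapping spatially adaptive.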

Genetic Algorithm Based Feature Reduction For Depth Estimation Of Image

  • 신성식;권오봉
    • 전자공학회논문지CI
    • /
    • Vol. 48, No. 2
    • /
    • pp.47-54
    • /
    • 2011
  • This paper describes a method that uses a genetic algorithm to reduce the set of features used when estimating per-region depth information from a single image through learning, thereby shortening the depth estimation time. Depth information is estimated from the relationships among features such as the image's energy values and texture gradients. Because the dimensionality of these features is large, computation time increases, and using features without judging their importance can even hurt performance; the feature dimensionality therefore needs to be reduced according to importance. Experiments with the benchmark data provided by Stanford University show that, compared with using all features, feature extraction and depth estimation computation time improve by about 60%, and accuracy improves by 0.4% on average and up to 2.5%.
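The genetic-algorithm feature reduction above can be sketched with binary masks over features. The toy fitness function below stands in for depth-estimation accuracy, and all parameters (population size, mutation rate, which features are "informative") are illustrative:

```python
import random

# Sketch of genetic-algorithm feature reduction: individuals are binary
# masks over features; fitness rewards masks that keep the informative
# features (here, features 0 and 2) while penalizing mask size. In the
# paper, fitness would be depth-estimation accuracy on validation data.

def fitness(mask):
    informative = {0, 2}
    hits = sum(1 for i, bit in enumerate(mask) if bit and i in informative)
    return hits - 0.1 * sum(mask)  # reward relevance, penalize size

def evolve(n_features=5, pop_size=20, generations=30, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_features)  # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:              # bit-flip mutation
                i = rng.randrange(n_features)
                child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best)  # expected to keep only the informative features 0 and 2
```

Because elitist selection never discards the best mask found so far, the search converges quickly on small problems like this; the paper applies the same idea to the much larger feature set used for depth estimation.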