• Title/Summary/Keyword: Feature representation


Computation of 3D Coordinates from Stereo Images with RPCs (RPC를 이용한 Stereo 영상으로부터의 3차원 좌표 추출)

  • Kim Kwang-Eun
    • Korean Journal of Remote Sensing
    • /
    • v.21 no.2
    • /
    • pp.135-143
    • /
    • 2005
  • RPC (Rational Polynomial Camera) models have become the replacement model of choice for a number of high-resolution satellite imagery providers. RPCs (Rational Polynomial Coefficients) provide a compact, accurate representation of the ground-to-image geometry, allowing users to perform full photogrammetric processing of satellite imagery, including block adjustment, 3D feature extraction and orthorectification. This paper presents an algorithm for 3D feature extraction using the downhill simplex method, which requires only function evaluations, not derivatives. The algorithm was implemented as an executable software program and tested using stereo IKONOS images of Seoul. The results showed that the proposed algorithm is fast and accurate enough to be used as a practical method for 3D feature extraction from stereo images with RPCs.
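
The intersection step described above lends itself to a short illustration. The sketch below is only a minimal, hedged example: project() is a toy rational stand-in for the full RPC forward model (ratios of 20-term cubic polynomials), and all coefficients and the ground point are invented so the example runs end to end; SciPy's Nelder-Mead routine plays the role of the downhill simplex search that needs only function evaluations.

```python
# Minimal sketch: recover (lat, lon, height) by minimizing the stereo
# reprojection error with the derivative-free downhill simplex (Nelder-Mead)
# method. project() is an invented stand-in for a real RPC forward model.

import numpy as np
from scipy.optimize import minimize

def project(cam, ground):
    """Map a ground point to image (line, sample) with a toy rational model."""
    lat, lon, h = ground
    num_l = cam["a"][0] + cam["a"][1]*lat + cam["a"][2]*lon + cam["a"][3]*h
    num_s = cam["b"][0] + cam["b"][1]*lat + cam["b"][2]*lon + cam["b"][3]*h
    den   = 1.0 + cam["c"][0]*lat + cam["c"][1]*lon + cam["c"][2]*h
    return np.array([num_l / den, num_s / den])

def reprojection_error(x, cam_l, cam_r, obs_l, obs_r):
    """Sum of squared image-space residuals over both stereo images."""
    return (np.sum((project(cam_l, x) - obs_l) ** 2)
            + np.sum((project(cam_r, x) - obs_r) ** 2))

# Two toy "cameras" and a known ground point used to simulate observations.
cam_l = {"a": [0.0, 900.0, 10.0, -0.4], "b": [0.0, 12.0, 870.0, 0.2], "c": [1e-4, 2e-4, 1e-5]}
cam_r = {"a": [5.0, 905.0, 11.0, -0.5], "b": [3.0, 10.0, 860.0, 0.3], "c": [2e-4, 1e-4, 2e-5]}
truth = np.array([0.6520, 0.4410, 0.0350])          # normalized lat, lon, height
obs_l, obs_r = project(cam_l, truth), project(cam_r, truth)

# Downhill simplex needs only function evaluations, never derivatives.
result = minimize(reprojection_error, x0=np.zeros(3),
                  args=(cam_l, cam_r, obs_l, obs_r),
                  method="Nelder-Mead", options={"xatol": 1e-9, "fatol": 1e-12})
print(result.x, truth)                               # the recovered point approaches the truth
```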

Novel Intent based Dimension Reduction and Visual Features Semi-Supervised Learning for Automatic Visual Media Retrieval

  • Kunisetti, Subramanyam;Ravichandran, Suban
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.6
    • /
    • pp.230-240
    • /
    • 2022
  • Sharing videos online is an increasingly important activity in applications such as surveillance and mobile video search, so personalized web video retrieval systems are needed to help users find videos relevant to specific content. To support this, features are computed from videos, with dimensionality reduction, to capture discriminative aspects of each scene based on shape, histogram, texture, object annotation, coordinates, color and contour data. Dimensionality reduction depends mainly on feature extraction and feature selection for multi-labeled retrieval from multimedia data. Many researchers have proposed techniques to reduce the dimensionality of visual video features, but each has advantages and disadvantages for retrieval with advanced features. In this research, we present a Novel Intent based Dimension Reduction Semi-Supervised Learning Approach (NIDRSLA) that examines dimensionality reduction for exact and fast video retrieval based on different visual features. For dimensionality reduction, NIDRSLA learns a projection matrix by increasing the dependence between the enlarged data and the projected-space features. The proposed approach also addresses video segmentation with frame selection using low-level and high-level features, together with efficient object annotation for video representation. Experiments performed on a synthetic dataset demonstrate the efficiency of the proposed approach compared with traditional state-of-the-art video retrieval methods.
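
One possible reading of "learning the projection matrix by increasing the dependence" is an HSIC-style supervised projection. The sketch below is a generic stand-in under that assumption, not the authors' NIDRSLA implementation; the visual features (e.g., per-frame color histograms or texture statistics) are assumed to be precomputed into the matrix X.

```python
# Hedged sketch: a dependence-maximizing (HSIC-style) projection used as a
# stand-in for the paper's projection-learning step.

import numpy as np

def dependence_maximizing_projection(X, y, k):
    """Return a (d, k) projection maximizing tr(W^T X^T H K H X W),
    where K is a label kernel and H the centering matrix."""
    n, d = X.shape
    H = np.eye(n) - np.ones((n, n)) / n                 # centering matrix
    K = (y[:, None] == y[None, :]).astype(float)        # delta kernel on labels
    M = X.T @ H @ K @ H @ X                             # d x d criterion matrix
    eigvals, eigvecs = np.linalg.eigh(M)
    return eigvecs[:, np.argsort(eigvals)[::-1][:k]]    # top-k eigenvectors

# Toy usage: 200 labelled feature vectors of dimension 64 reduced to 8.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))
y = rng.integers(0, 4, size=200)
W = dependence_maximizing_projection(X, y, k=8)
Z = X @ W                                               # reduced representation
print(Z.shape)                                          # (200, 8)
```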

Cell Images Classification using Deep Convolutional Autoencoder of Unsupervised Learning (비지도학습의 딥 컨벌루셔널 자동 인코더를 이용한 셀 이미지 분류)

  • Vununu, Caleb;Park, Jin-Hyeok;Kwon, Oh-Jun;Lee, Suk-Hwan;Kwon, Ki-Ryong
    • Annual Conference of KIPS
    • /
    • 2021.11a
    • /
    • pp.942-943
    • /
    • 2021
  • The present work proposes a classification system for HEp-2 cell images using an unsupervised deep feature learning method. Unlike most of the state-of-the-art methods in the literature, which utilize deep learning in a strictly supervised way, we propose here the use of a deep convolutional autoencoder (DCAE) as the principal feature extractor for classifying the different types of HEp-2 cell images. The network takes the original cell images as inputs and learns to reconstruct them in order to capture the features related to the global shape of the cells. A final feature vector is constructed from the latent representations extracted from the DCAE, giving a highly discriminative feature representation. The created features are then fed to a nonlinear classifier whose output represents the final type of the cell image. We have tested the discriminability of the proposed features on one of the most popular HEp-2 cell classification datasets, the SNPHEp-2 dataset, and the results show that the proposed features capture the distinctive characteristics of the different cell types while performing at least as well as the current deep learning-based state-of-the-art methods.
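
A minimal sketch of the described pipeline follows: a deep convolutional autoencoder trained only to reconstruct the cell images, whose latent code is then taken as the feature vector for a separate classifier. The image size, layer widths and latent dimension are illustrative assumptions, not the authors' exact architecture.

```python
# Hedged sketch of unsupervised DCAE feature learning: reconstruct images,
# then reuse the latent code as a discriminative feature vector.

import torch
import torch.nn as nn

class DCAE(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(                              # 1x64x64 -> latent
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 16x32x32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32x16x16
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 64x8x8
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, latent_dim),
        )
        self.decoder = nn.Sequential(                              # latent -> 1x64x64
            nn.Linear(latent_dim, 64 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = DCAE()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

images = torch.rand(32, 1, 64, 64)             # stand-in for a batch of cell images
for _ in range(5):                             # unsupervised reconstruction training
    recon, _ = model(images)
    loss = criterion(recon, images)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

with torch.no_grad():                          # latent codes become the features
    _, features = model(images)
print(features.shape)                          # (32, 128) vectors for a nonlinear classifier
```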

Expansion of Word Representation for Named Entity Recognition Based on Bidirectional LSTM CRFs (Bidirectional LSTM CRF 기반의 개체명 인식을 위한 단어 표상의 확장)

  • Yu, Hongyeon;Ko, Youngjoong
    • Journal of KIISE
    • /
    • v.44 no.3
    • /
    • pp.306-313
    • /
    • 2017
  • Named entity recognition (NER) seeks to locate and classify named entities in text into pre-defined categories such as names of persons, organizations, locations, expressions of times, etc. Recently, many state-of-the-art NER systems have been implemented with bidirectional LSTM CRFs. Deep learning models based on long short-term memory (LSTM) generally depend on word representations as input. In this paper, we propose an approach that expands the word representation by using pre-trained word embeddings, part-of-speech (POS) tag embeddings, syllable embeddings and named entity dictionary feature vectors. Our experiments show that the proposed approach creates useful word representations as input to bidirectional LSTM CRFs. In our final evaluation, the proposed representation performs 8.05%p better than a baseline NER model that uses only the pre-trained word embedding vector.
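
The expanded word representation can be illustrated as a simple concatenation of embeddings. In the hedged sketch below, the vocabulary sizes, embedding dimensions and syllable pooling are assumptions; a CRF layer would sit on top of the bidirectional LSTM outputs but is omitted here.

```python
# Hedged sketch: concatenate word, POS, syllable and dictionary features per
# token before the bidirectional LSTM; all sizes are illustrative assumptions.

import torch
import torch.nn as nn

class ExpandedWordRepresentation(nn.Module):
    def __init__(self, vocab=20000, pos_tags=45, syllables=2500, dict_dim=15):
        super().__init__()
        self.word_emb = nn.Embedding(vocab, 100)     # initialized from pre-trained vectors
        self.pos_emb = nn.Embedding(pos_tags, 25)    # part-of-speech tag embedding
        self.syl_emb = nn.Embedding(syllables, 30)   # syllable embedding

    def forward(self, word_ids, pos_ids, syl_ids, dict_feats):
        # word_ids, pos_ids: (batch, seq); syl_ids: (batch, seq, max_syllables)
        # dict_feats: (batch, seq, dict_dim) named-entity dictionary match features
        syl_repr = self.syl_emb(syl_ids).mean(dim=2)          # pool syllables per word
        return torch.cat([self.word_emb(word_ids), self.pos_emb(pos_ids),
                          syl_repr, dict_feats], dim=-1)

# Toy usage: the expanded representation feeding a bidirectional LSTM encoder.
repr_layer = ExpandedWordRepresentation()
encoder = nn.LSTM(input_size=100 + 25 + 30 + 15, hidden_size=128,
                  batch_first=True, bidirectional=True)
word_ids = torch.randint(0, 20000, (2, 7))
pos_ids = torch.randint(0, 45, (2, 7))
syl_ids = torch.randint(0, 2500, (2, 7, 5))
dict_feats = torch.zeros(2, 7, 15)
x = repr_layer(word_ids, pos_ids, syl_ids, dict_feats)        # (2, 7, 170)
out, _ = encoder(x)                                           # inputs to a CRF tagging layer
print(x.shape, out.shape)                                     # (2, 7, 170) (2, 7, 256)
```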

Data anomaly detection for structural health monitoring of bridges using shapelet transform

  • Arul, Monica;Kareem, Ahsan
    • Smart Structures and Systems
    • /
    • v.29 no.1
    • /
    • pp.93-103
    • /
    • 2022
  • With the wider availability of sensor technology through easily affordable sensor devices, several Structural Health Monitoring (SHM) systems are deployed to monitor vital civil infrastructure. The continuous monitoring provides valuable information about the health of the structure that can help provide a decision support system for retrofits and other structural modifications. However, when the sensors are exposed to harsh environmental conditions, the data measured by the SHM systems tend to be affected by multiple anomalies caused by faulty or broken sensors. Given a deluge of high-dimensional data collected continuously over time, research into using machine learning methods to detect anomalies is a topic of great interest to the SHM community. This paper contributes to this effort by proposing a relatively new time series representation named "Shapelet Transform" in combination with a Random Forest classifier to autonomously identify anomalies in SHM data. The shapelet transform is a unique time series representation based solely on the shape of the time series data. Considering the individual characteristics unique to every anomaly, the application of this transform yields a new shape-based feature representation that can be combined with any standard machine learning algorithm to detect anomalous data with no manual intervention. For the present study, the anomaly detection framework consists of three steps: identifying unique shapes from anomalous data, using these shapes to transform the SHM data into a local-shape space, and training machine learning algorithms on this transformed data to identify anomalies. The efficacy of this method is demonstrated by the identification of anomalies in acceleration data from an SHM system installed on a long-span bridge in China. The results show that multiple data anomalies in SHM data can be automatically detected with high accuracy using the proposed method.
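
The shapelet transform itself is compact enough to sketch: each series becomes a vector of its minimum sliding-window distances to a set of short shapelets, on which a Random Forest is trained. The sketch below simplifies shapelet discovery to subsequences sampled from labelled examples; the study itself selects discriminative shapelets per anomaly type, and the data here are synthetic.

```python
# Hedged sketch of the shapelet transform + Random Forest idea on toy data.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def min_distance(series, shapelet):
    """Smallest Euclidean distance between a shapelet and any window of the series."""
    w = len(shapelet)
    return min(np.linalg.norm(series[i:i + w] - shapelet)
               for i in range(len(series) - w + 1))

def shapelet_transform(X, shapelets):
    """Map raw series (n, T) into the local-shape space (n, n_shapelets)."""
    return np.array([[min_distance(x, s) for s in shapelets] for x in X])

# Toy data: class 0 = clean sinusoid, class 1 = anomalous flat segment then signal.
rng = np.random.default_rng(1)
t = np.linspace(0, 6 * np.pi, 200)
normal = [np.sin(t) + 0.1 * rng.normal(size=t.size) for _ in range(40)]
anomalous = [np.concatenate([np.zeros(100), np.sin(t[:100])]) + 0.1 * rng.normal(size=t.size)
             for _ in range(40)]
X = np.array(normal + anomalous)
y = np.array([0] * 40 + [1] * 40)

# Candidate shapelets: short subsequences drawn from a few labelled examples.
shapelets = [X[i, start:start + 30] for i in (0, 1, 40, 41) for start in (20, 120)]

X_shape = shapelet_transform(X, shapelets)               # shape-based feature representation
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_shape, y)
print(clf.score(X_shape, y))                             # training accuracy of the sketch
```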

English Phoneme Recognition using Segmental-Feature HMM (분절 특징 HMM을 이용한 영어 음소 인식)

  • Yun, Young-Sun
    • Journal of KIISE:Software and Applications
    • /
    • v.29 no.3
    • /
    • pp.167-179
    • /
    • 2002
  • In this paper, we propose a new acoustic model for characterizing segmental features and an algorithm based upon the general framework of hidden Markov models (HMMs) in order to compensate for the weaknesses of HMM assumptions. The segmental features are represented as a trajectory of observed vector sequences by a polynomial regression function, because a single-frame feature cannot effectively represent the temporal dynamics of speech signals. To apply the segmental features to pattern classification, we adopted the segmental HMM (SHMM), which is known to be an effective method for representing the trend of speech signals. The SHMM separates the observation probability of a given state into extra- and intra-segmental variations, which capture the long-term and short-term variabilities, respectively. To incorporate the segmental characteristics into the acoustic model, we present the segmental-feature HMM (SFHMM) by modifying the SHMM. The SFHMM represents the extra- and intra-segmental variation as, respectively, the observation probability of the trajectory in a given state and the trajectory estimation error for the given segment. We conducted several experiments on the TIMIT database to establish the effectiveness of the proposed method and the characteristics of the segmental features. From the experimental results, we conclude that, although its number of parameters is greater than that of the conventional HMM, the proposed method is valuable for its flexible and informative feature representation and for its performance improvement.
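
The segmental-feature idea can be illustrated with a short sketch: a segment of frame-level feature vectors is summarized by a polynomial regression trajectory, and the residual of that fit plays the role of the intra-segmental (short-term) variation. The feature dimensionality, segment length and polynomial order below are illustrative assumptions.

```python
# Hedged sketch: polynomial-regression trajectory of one segment of frame features.

import numpy as np

def segment_trajectory(frames, order=2):
    """Represent a segment by a polynomial regression trajectory.

    frames : (n_frames, n_dims) frame-level feature vectors of one segment.
    Returns (coeffs, predicted, error): coeffs is (order+1, n_dims), highest
    power first; error is the mean squared trajectory-estimation residual.
    """
    n_frames = frames.shape[0]
    x = np.linspace(-1.0, 1.0, n_frames)             # normalized time within the segment
    coeffs = np.polyfit(x, frames, deg=order)        # one polynomial per feature dimension
    predicted = np.vander(x, order + 1) @ coeffs     # trajectory evaluated at each frame
    error = float(np.mean((frames - predicted) ** 2))
    return coeffs, predicted, error

# Toy usage: a 12-frame segment of 13-dimensional cepstral-like features.
rng = np.random.default_rng(0)
frames = np.cumsum(rng.normal(scale=0.1, size=(12, 13)), axis=0)
coeffs, trajectory, error = segment_trajectory(frames, order=2)
print(coeffs.shape, trajectory.shape, error)         # (3, 13) (12, 13) intra-segment residual
```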

Efficient Data Representation of Stereo Images Using Edge-based Mesh Optimization (윤곽선 기반 메쉬 최적화를 이용한 효율적인 스테레오 영상 데이터 표현)

  • Park, Il-Kwon;Byun, Hye-Ran
    • Journal of Broadcast Engineering
    • /
    • v.14 no.3
    • /
    • pp.322-331
    • /
    • 2009
  • This paper proposes an efficient data representation of stereo images using edge-based mesh optimization. Mesh-based two-dimensional warping for stereo images depends mainly on the performance of node selection and the disparity estimation of the selected nodes. Therefore, the proposed method first constructs a feature map consisting of both strong edges and object boundary lines for node selection, and then generates a grid-based mesh structure using the initial nodes. The displacement of each nodal position is iteratively estimated by minimizing the prediction error between the target image and the predicted image after two-dimensional warping of the local area. In general, iterative two-dimensional warping for optimized nodal positions requires high time complexity. To overcome this problem, we assume that the input stereo images contain only horizontal disparity and that the optimal nodal positions are located on edges, including object boundary lines. The proposed iterative warping method therefore searches for the optimal nodal position only along edge lines in the horizontal direction. In the experiments, we compare the proposed method with other mesh-based methods in terms of quality, using the Peak Signal to Noise Ratio (PSNR) as a function of the number of nodes; the computational complexity of optimal mesh generation is also evaluated. The results show that the proposed method provides an efficient stereo image representation, offering not only fast optimal mesh generation but also reduced quality degradation despite a small number of nodes.
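
The two assumptions the method exploits, nodes restricted to edge pixels and displacement searched only horizontally, can be sketched briefly. In the hedged example below, the gradient-magnitude edge map, patch size and search range are illustrative stand-ins; the actual feature map construction, mesh generation and 2D warping are omitted.

```python
# Hedged sketch: edge-restricted nodes with a horizontal-only local-error search.

import numpy as np

def edge_map(img, thresh=0.1):
    """Gradient-magnitude edge map, a simple stand-in for the paper's feature map."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy) > thresh

def horizontal_node_search(left, right, node, half=4, max_disp=16):
    """Displacement of a node found by a horizontal-only local prediction-error search."""
    r, c = node
    ref = left[r - half:r + half + 1, c - half:c + half + 1]
    best, best_err = 0, np.inf
    for d in range(max_disp + 1):                     # search along the horizontal line only
        if c - half - d < 0:
            break
        cand = right[r - half:r + half + 1, c - half - d:c + half + 1 - d]
        err = np.sum(np.abs(ref - cand))              # local prediction error (SAD)
        if err < best_err:
            best, best_err = d, err
    return best

# Toy stereo pair: a bright square shifted horizontally by 6 pixels.
left = np.zeros((64, 64)); left[20:40, 30:50] = 1.0
right = np.zeros((64, 64)); right[20:40, 24:44] = 1.0

edges = edge_map(left)
# Keep nodes on rows that cross only the square's vertical edges (well-constrained matches).
nodes = [(r, c) for r, c in zip(*np.nonzero(edges)) if 25 <= r < 35]
disparities = [horizontal_node_search(left, right, n) for n in nodes[:8]]
print(disparities)                                    # 6 at every node on a vertical edge
```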

Automatic Sputum Color Image Segmentation for Lung Cancer Diagnosis

  • Taher, Fatma;Werghi, Naoufel;Al-Ahmad, Hussain
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.7 no.1
    • /
    • pp.68-80
    • /
    • 2013
  • Lung cancer is considered to be the leading cause of cancer death worldwide. A commonly used technique consists of analyzing sputum images to detect lung cancer cells. However, the analysis of sputum is time consuming and requires highly trained personnel to avoid errors. The manual screening of sputum samples therefore needs to be improved through image processing techniques. In this paper we present a Computer Aided Diagnosis (CAD) system for early detection and diagnosis of lung cancer based on the analysis of sputum color images, with the aim of attaining a high accuracy rate and reducing the time needed to analyze such sputum samples. In order to form general diagnostic rules, we present a framework for the segmentation and extraction of sputum cells in sputum images using, respectively, a Bayesian classification method followed by region detection, and feature extraction techniques to determine the shape of the nuclei inside the sputum cells. The final results are used in a CAD system for early detection of lung cancer. We analyzed the performance of the Bayesian classification with respect to the color space representation and quantification. Our methods were validated via a series of experiments conducted with a data set of 100 images. Our evaluation criteria were based on sensitivity, specificity and accuracy.
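
The pixel-level Bayesian classification stage can be sketched with a Gaussian Bayes model over color vectors, used here as a stand-in for the paper's classifier; the synthetic colors and the RGB space are assumptions, and the paper itself studies how the choice of color space and quantization affects this step.

```python
# Hedged sketch: Bayesian pixel classification of a color image into a cell mask.

import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

# Synthetic training pixels: bluish-purple "cell" pixels vs. pale background pixels.
cell_px = rng.normal(loc=[90, 60, 140], scale=15, size=(500, 3))
background_px = rng.normal(loc=[210, 200, 190], scale=15, size=(500, 3))
X_train = np.vstack([cell_px, background_px])
y_train = np.array([1] * 500 + [0] * 500)

clf = GaussianNB().fit(X_train, y_train)                  # Bayesian pixel classifier

# Classify every pixel of a toy image into a binary cell mask; region detection
# and nucleus shape analysis would follow in the full pipeline.
image = rng.normal(loc=[200, 195, 185], scale=10, size=(64, 64, 3))
image[20:40, 20:40] = rng.normal(loc=[95, 65, 135], scale=10, size=(20, 20, 3))
mask = clf.predict(image.reshape(-1, 3)).reshape(64, 64)
print(mask.sum(), "pixels labelled as cell")              # close to 400 for the toy image
```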

An Approach for the Cross Modality Content-Based Image Retrieval between Different Image Modalities

  • Jeong, Inseong;Kim, Gihong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.31 no.6_2
    • /
    • pp.585-592
    • /
    • 2013
  • CBIR is an effective tool for searching and extracting image content in a large remote sensing image database queried by an operator or end user. However, because imaging principles differ between sensors, visual representations vary across image modality types. Considering that images of various modalities are archived in the database, the image modality difference has to be tackled for successful CBIR implementation. However, this topic has seldom been dealt with and thus still poses a practical challenge. This study suggests a cross-modality CBIR method (termed CM-CBIR) that transforms a given query feature vector by a supervised procedure in order to link the modalities. This procedure leverages the analyst's skill in the training step, after which the transformed query vector is created and used to search target images of a different modality. Initial results show the potential of the proposed CM-CBIR method by retrieving the image content of interest from images of a different modality. Although its retrieval capability is outperformed by that of same-modality CBIR (abbreviated SM-CBIR), the gap in retrieval performance can be compensated for by employing the user's relevance feedback, a conventional technique for retrieval enhancement.
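
A minimal sketch of the cross-modality step, under the assumption of a simple linear mapping: analyst-supplied pairs of corresponding feature vectors are used to learn a source-to-target transformation by least squares, and the transformed query vector is then matched against the target-modality archive. The feature dimensions and data are synthetic placeholders.

```python
# Hedged sketch: supervised query-vector transformation between feature spaces.

import numpy as np

rng = np.random.default_rng(0)

# Paired training features supplied during the supervised training step.
src_train = rng.normal(size=(100, 32))                     # source-modality features
true_map = rng.normal(size=(32, 32))
tgt_train = src_train @ true_map + 0.01 * rng.normal(size=(100, 32))

# Learn the source-to-target mapping by least squares.
W, *_ = np.linalg.lstsq(src_train, tgt_train, rcond=None)

# Transform a source-modality query and retrieve from the target-modality archive.
target_db = rng.normal(size=(1000, 32))                    # archived target-modality features
query_src = rng.normal(size=32)
query_tgt = query_src @ W                                  # cross-modality query vector
ranked = np.argsort(np.linalg.norm(target_db - query_tgt, axis=1))
print(ranked[:5])                                          # indices of the top-5 matches
```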

Spatio-temporal Semantic Features for Human Action Recognition

  • Liu, Jia;Wang, Xiaonian;Li, Tianyu;Yang, Jie
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.6 no.10
    • /
    • pp.2632-2649
    • /
    • 2012
  • Most approaches to human action recognition are limited by the use of simple action datasets captured under controlled environments, or focus on excessively localized features without sufficiently exploring spatio-temporal information. This paper proposes a framework for recognizing realistic human actions. Specifically, a new action representation is proposed based on computing a rich set of descriptors from keypoint trajectories. To obtain efficient and compact representations of actions, we develop a feature fusion method that combines spatio-temporal local motion descriptors according to the movement of the camera, which is detected from the distribution of spatio-temporal interest points in the clips. A new topic model called the Markov Semantic Model is proposed for semantic feature selection; it relies on the different kinds of dependencies between words produced by "syntactic" and "semantic" constraints. The informative features are selected collaboratively based on the different types of dependencies between words produced by short-range and long-range constraints. Building on nonlinear SVMs, we validate the proposed hierarchical framework on several realistic action datasets.
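
As a rough illustration of the surrounding pipeline (the Markov Semantic Model feature selection and camera-motion-aware fusion are omitted), the sketch below quantizes local descriptors into a visual vocabulary, represents each clip as a histogram over that vocabulary, and trains a nonlinear SVM on the histograms; the descriptors are synthetic stand-ins for trajectory-based motion descriptors.

```python
# Hedged sketch: bag-of-words clip representation with a nonlinear SVM classifier.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def clip_histogram(descriptors, vocabulary):
    """Bag-of-words histogram of a clip's local descriptors."""
    words = vocabulary.predict(descriptors)
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / hist.sum()

# Synthetic descriptors: two "action classes" with different descriptor statistics.
clips, labels = [], []
for label, center in [(0, 0.0), (1, 1.5)]:
    for _ in range(20):
        clips.append(center + rng.normal(size=(150, 16)))   # 150 descriptors per clip
        labels.append(label)

vocabulary = KMeans(n_clusters=32, n_init=10, random_state=0).fit(np.vstack(clips))
X = np.array([clip_histogram(c, vocabulary) for c in clips])
clf = SVC(kernel="rbf", gamma="scale").fit(X, labels)        # nonlinear SVM on histograms
print(clf.score(X, labels))                                  # training accuracy of the sketch
```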