• Title/Summary/Keyword: Semantic feature

Scene Change Detection and Key Frame Selection Using Fast Feature Extraction in the MPEG-Compressed Domain (MPEG 압축 영상에서의 고속 특징 요소 추출을 이용한 장면 전환 검출과 키 프레임 선택)

  • 송병철;김명준;나종범
    • Journal of Broadcast Engineering
    • /
    • v.4 no.2
    • /
    • pp.155-163
    • /
    • 1999
  • In this paper, we propose novel scene change detection and key frame selection techniques that use two feature images, i.e., DC and edge images, extracted directly from MPEG-compressed video. For fast edge image extraction, we suggest utilizing the 5 lower AC coefficients of each DCT block. Based on this scheme, we present another edge image extraction technique using AC prediction. Although the former is superior to the latter in terms of visual quality, both methods can extract important edge features well. Simulation results indicate that scene changes such as cuts, fades, and dissolves can be correctly detected by using the edge energy diagram obtained from edge images and histograms obtained from DC images. In addition, we find that our edge images are comparable to those obtained in the spatial domain while requiring a much lower computational cost. Based on the human visual system (HVS), a key frame of each scene can also be selected. In comparison with an existing method using optical flow, our scheme can select semantic key frames because we use only the above edge and DC images.

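As a rough illustration of the compressed-domain features described above, the sketch below (Python/NumPy, not the authors' code) builds a DC image and an edge-strength image directly from the 8x8 DCT coefficient blocks of an I-frame; the exact choice and weighting of the five low-order AC coefficients here is an assumption.

```python
import numpy as np

def dc_and_edge_images(dct_blocks):
    """Build small DC and edge feature images from 8x8 DCT blocks.

    dct_blocks: array of shape (H/8, W/8, 8, 8) holding the DCT
    coefficients of each block, as decoded from an MPEG I-frame.
    Returns one DC value and one edge-strength value per block.
    """
    # DC term: the (0,0) coefficient is proportional to the block mean,
    # so the DC image is a 1/64-size thumbnail of the frame.
    dc_image = dct_blocks[:, :, 0, 0] / 8.0

    # Edge strength: combine a few low-order AC coefficients.
    # AC(1,0)/AC(2,0) respond to horizontal structure, AC(0,1)/AC(0,2)
    # to vertical structure, AC(1,1) to diagonals -- five coefficients,
    # mirroring the paper's "5 lower AC coefficients" idea.
    horiz = np.abs(dct_blocks[:, :, 1, 0]) + np.abs(dct_blocks[:, :, 2, 0])
    vert = np.abs(dct_blocks[:, :, 0, 1]) + np.abs(dct_blocks[:, :, 0, 2])
    diag = np.abs(dct_blocks[:, :, 1, 1])
    edge_image = horiz + vert + diag
    return dc_image, edge_image
```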

The design and implementation of Object-based bioimage matching on a Mobile Device (모바일 장치기반의 바이오 객체 이미지 매칭 시스템 설계 및 구현)

  • Park, Chanil;Moon, Seung-jin
    • Journal of Internet Computing and Services
    • /
    • v.20 no.6
    • /
    • pp.1-10
    • /
    • 2019
  • Object-based image matching algorithms have been widely used in the image processing and computer vision fields. A variety of applications based on image matching algorithms have recently been developed for object recognition, 3D modeling, video tracking, and biomedical informatics. One prominent example of image matching features is the Scale Invariant Feature Transform (SIFT) scheme. However, many applications using the SIFT algorithm have been implemented on a stand-alone basis rather than in a client-server architecture. In this paper, we implement a client-server system that uses the SIFT algorithm to identify and match objects in biomedical images, providing useful information to the user on a recently released mobile platform. The major methodological contribution of this work is leveraging the convenient user interface and ubiquitous Internet connection of mobile devices for interactive delineation, segmentation, representation, matching, and retrieval of biomedical images. With these technologies, we showcase examples of reliable image matching across different views of an object in semantic image search applications for biomedical informatics.
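The SIFT matching step underlying such a system can be reproduced with OpenCV's standard API. The following minimal sketch (not the authors' code) detects keypoints in two images and keeps correspondences that pass Lowe's ratio test:

```python
import cv2

def match_sift(img1_path, img2_path, ratio=0.75):
    """Match SIFT keypoints between two images with Lowe's ratio test."""
    img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # k-NN matching followed by Lowe's ratio test to discard
    # ambiguous correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des1, des2, k=2)
    good = []
    for pair in knn:
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return kp1, kp2, good
```

In a client-server setting, descriptor extraction and matching would typically run server-side, with the mobile client uploading the query image and receiving match results.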

An Efficient Indoor-Outdoor Scene Classification Method (효율적인 실내의 영상 분류 기법)

  • Kim, Won-Jun;Kim, Chang-Ick
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.46 no.5
    • /
    • pp.48-55
    • /
    • 2009
  • Prior work on indoor-outdoor classification has been based on simple combinations of low-level features. However, since the extreme variability of scene content poses many challenging problems, most recently proposed methods combine low-level features with high-level information such as the presence of trees and sky. Extracting these regions from videos requires additional tasks, which may increase the number of feature dimensions or the computational burden. Therefore, an efficient indoor-outdoor scene classification method is proposed in this paper. First, each frame is divided into five equal-sized blocks. We then define the edge and color orientation histogram (ECOH) descriptor to represent each sub-block efficiently. Finally, all ECOH values are simply concatenated to generate the feature vector. To demonstrate the efficiency and robustness of the proposed method, it is evaluated on a diverse database of over 1,200 videos. Moreover, we improve the classification performance by using weight values determined through a learning process.
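A descriptor in the spirit of ECOH can be sketched as follows; the block layout (four quadrants plus a center block, all the same size) and the bin counts are assumptions, since the abstract does not specify the exact geometry.

```python
import numpy as np

def ecoh_block(block_gray, block_rgb, n_bins=8):
    """Per-block feature: edge-orientation histogram plus coarse color histogram."""
    # Edge orientation histogram: gradient angles weighted by magnitude.
    gy, gx = np.gradient(block_gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)     # orientations in [0, pi)
    edge_hist, _ = np.histogram(ang, bins=n_bins, range=(0, np.pi), weights=mag)
    # Coarse color histogram over each RGB channel.
    color_hist = np.concatenate(
        [np.histogram(block_rgb[..., c], bins=n_bins, range=(0, 255))[0]
         for c in range(3)])
    feat = np.concatenate([edge_hist, color_hist]).astype(float)
    return feat / (feat.sum() + 1e-8)           # normalize per block

def ecoh_descriptor(gray, rgb):
    """Concatenate the per-block features of five equal-sized blocks."""
    h, w = gray.shape
    blocks = [(slice(0, h // 2), slice(0, w // 2)),
              (slice(0, h // 2), slice(w // 2, w)),
              (slice(h // 2, h), slice(0, w // 2)),
              (slice(h // 2, h), slice(w // 2, w)),
              (slice(h // 4, 3 * h // 4), slice(w // 4, 3 * w // 4))]
    return np.concatenate([ecoh_block(gray[s], rgb[s]) for s in blocks])
```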

A Study on Lightweight Model with Attention Process for Efficient Object Detection (효율적인 객체 검출을 위해 Attention Process를 적용한 경량화 모델에 대한 연구)

  • Park, Chan-Soo;Lee, Sang-Hun;Han, Hyun-Ho
    • Journal of Digital Convergence
    • /
    • v.19 no.5
    • /
    • pp.307-313
    • /
    • 2021
  • In this paper, a lightweight network with fewer parameters than existing object detection methods is proposed. Currently used detection models have greatly increased network complexity to improve accuracy. The proposed network therefore uses EfficientNet as its feature extraction network, and the subsequent layers are formed in a pyramid structure to exploit both low-level detailed features and high-level semantic features. An attention process is applied between the pyramid levels to suppress noise that is unnecessary for prediction. All computations in the network are replaced by depth-wise and point-wise convolutions to minimize the amount of computation. The proposed network was trained and evaluated on the PASCAL VOC dataset. Experiments showed that the fused features, after a refinement process, are robust for various objects. Compared with CNN-based detection models, detection accuracy is improved with a small amount of computation. Adjusting the anchor ratios according to object size remains a topic for future study.
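The depth-wise plus point-wise substitution mentioned in the abstract is the standard depthwise-separable convolution. A minimal PyTorch sketch of such a block (illustrative, not the authors' network) is:

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depth-wise + point-wise convolution pair: the depth-wise 3x3 filters
    each channel independently, then the 1x1 point-wise conv mixes channels,
    cutting multiply-adds relative to a full 3x3 convolution."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3,
                                   stride=stride, padding=1,
                                   groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))
```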

Deep learning-based post-disaster building inspection with channel-wise attention and semi-supervised learning

  • Wen Tang;Tarutal Ghosh Mondal;Rih-Teng Wu;Abhishek Subedi;Mohammad R. Jahanshahi
    • Smart Structures and Systems
    • /
    • v.31 no.4
    • /
    • pp.365-381
    • /
    • 2023
  • Existing vision-based techniques for the inspection and condition assessment of civil infrastructure are mostly manual and consequently time-consuming, expensive, subjective, and risky. As a viable alternative, researchers have resorted to deep learning-based autonomous damage detection algorithms for expedited post-disaster reconnaissance of structures. Although a number of automatic damage detection algorithms have been proposed, the scarcity of labeled training data remains a major concern. To address this issue, this study proposes a semi-supervised learning (SSL) framework based on consistency regularization and cross-supervision. Image data from post-earthquake reconnaissance containing cracks, spalling, and exposed rebars are used to evaluate the proposed solution. Experiments are carried out under different data partition protocols, and it is shown that the proposed SSL method can use unlabeled images to enhance segmentation performance when only a limited amount of ground-truth labels is provided. This study also proposes DeepLab-AASPP and modified versions of U-Net++ based on a channel-wise attention mechanism to better segment the components and damage areas in images of reinforced concrete buildings. The channel-wise attention mechanism effectively improves network performance by dynamically scaling the feature maps so that the networks can focus on the more informative feature maps in the concatenation layer. The proposed DeepLab-AASPP achieves the best performance on the component segmentation and damage state segmentation tasks, with mIoU scores of 0.9850 and 0.7032, respectively. For the crack, spalling, and rebar segmentation tasks, the modified U-Net++ obtains the best performance, with IoU scores (excluding the background pixels) of 0.5449, 0.9375, and 0.5018, respectively. The proposed architectures won second place in the IC-SHM2021 competition in all five tasks of Project 2.
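The channel-wise attention described here, which rescales feature maps by learned per-channel weights, can be sketched in squeeze-and-excitation style; the exact form used in the paper may differ.

```python
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention: global average
    pooling summarizes each feature map, a small bottleneck MLP produces
    a per-channel weight in (0, 1), and the maps are rescaled so the
    network emphasizes the more informative channels."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid())

    def forward(self, x):                       # x: (N, C, H, W)
        w = x.mean(dim=(2, 3))                  # squeeze to (N, C)
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)
        return x * w                            # dynamically scale feature maps
```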

Large Language Models-based Feature Extraction for Short-Term Load Forecasting (거대언어모델 기반 특징 추출을 이용한 단기 전력 수요량 예측 기법)

  • Jaeseung Lee;Jehyeok Rew
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.29 no.3
    • /
    • pp.51-65
    • /
    • 2024
  • Accurate electrical load forecasting is important for the effective operation of power systems in smart grids. With recent developments in machine learning, artificial intelligence-based models for predicting power demand are being actively researched. However, since existing models take input variables as numerical features, forecasting accuracy may decrease because the semantic relationships between these features are not reflected. In this paper, we propose a scheme for short-term load forecasting that uses features extracted from the input data by a large language model. We first convert the input variables into a sentence-like prompt format. Then, we use the large language model with frozen weights to derive embedding vectors that represent the features of the prompt. These vectors are used to train the forecasting model. Experimental results show that the proposed scheme outperforms models based on numerical data, and by visualizing the attention weights of the large language model on the prompts, we identify the information that significantly influences predictions.
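A minimal sketch of the prompt-then-embed pipeline, using a small pretrained encoder from Hugging Face transformers as a stand-in for the paper's large language model; the prompt wording and record fields are hypothetical.

```python
import torch
from transformers import AutoTokenizer, AutoModel

def prompt_embedding(record, model_name="bert-base-uncased"):
    """Turn one load-forecasting record into a sentence-like prompt and
    extract a frozen-model embedding (load the model once in practice)."""
    # Hypothetical fields standing in for the paper's input variables.
    prompt = (f"On {record['date']} at {record['hour']}:00, the temperature "
              f"was {record['temp']} degrees and the previous-hour load "
              f"was {record['prev_load']} MW.")
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name)
    model.eval()                                  # frozen weights: no training
    with torch.no_grad():
        inputs = tok(prompt, return_tensors="pt")
        hidden = model(**inputs).last_hidden_state   # (1, T, D)
    return hidden.mean(dim=1).squeeze(0)          # mean-pool to one vector
```

The pooled vector would then serve as the input feature for a downstream forecasting model in place of the raw numerical variables.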

Hierarchical Overlapping Clustering to Detect Complex Concepts (중복을 허용한 계층적 클러스터링에 의한 복합 개념 탐지 방법)

  • Hong, Su-Jeong;Choi, Joong-Min
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.1
    • /
    • pp.111-125
    • /
    • 2011
  • Clustering is the process of grouping similar or related documents into a cluster and assigning a meaningful concept to that cluster. By narrowing the search down to the collection of documents belonging to related clusters, clustering facilitates fast and accurate retrieval of relevant documents. Effective clustering requires techniques for identifying similar documents and grouping them into a cluster, and for discovering the concept most relevant to the cluster. One problem that often appears in this context is the detection of a complex concept that overlaps with several simple concepts at the same hierarchical level. Previous clustering methods were unable to identify and represent a complex concept that belongs to several different clusters at the same level of the concept hierarchy, and also could not validate the semantic hierarchical relationship between a complex concept and each of its simple concepts. To solve these problems, this paper proposes a new clustering method that identifies and represents complex concepts efficiently. We developed the Hierarchical Overlapping Clustering (HOC) algorithm, which modifies the traditional agglomerative hierarchical clustering algorithm to allow overlapping clusters at the same level of the concept hierarchy. The HOC algorithm represents the clustering result not as a tree but as a lattice in order to detect complex concepts. We developed a system that employs the HOC algorithm for complex concept detection. This system operates in three phases: 1) preprocessing of documents, 2) clustering using the HOC algorithm, and 3) validation of the semantic hierarchical relationships among the concepts in the lattice obtained from clustering. The preprocessing phase represents the documents as x-y coordinate values in a 2-dimensional space based on the weights of the terms appearing in the documents. First, the documents go through a refinement process that applies stopword removal and stemming to extract index terms. Each index term is then assigned a TF-IDF weight, and the x-y coordinate value for each document is determined by combining the TF-IDF values of its terms. The clustering phase uses the HOC algorithm, in which the similarity between documents is calculated by the Euclidean distance. Initially, a cluster is generated for each document by grouping the documents closest to it. The distance between any two clusters is then measured, and the closest clusters are merged into a new cluster. This process is repeated until the root cluster is generated. In the validation phase, feature selection is applied to check whether the cluster concepts built by the HOC algorithm have meaningful hierarchical relationships. Feature selection extracts key features from a document by identifying and weighting its important and representative terms. To correctly select key features, a method is needed to determine how much each term contributes to the class of the document. Among several methods for this, this paper adopts the $\chi^2$ statistic, which measures the degree of dependency of a term t on a class c and represents the relationship between t and c as a numerical value. To demonstrate the effectiveness of the HOC algorithm, a series of performance evaluations was carried out using the well-known Reuters-21578 news collection. The results showed that the HOC algorithm greatly contributes to detecting and producing complex concepts by generating the concept hierarchy in a lattice structure.
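The TF-IDF weighting and the $\chi^2$ term-class dependency test described above can be reproduced with scikit-learn; the toy documents and labels below are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import chi2

docs = ["oil prices rise on supply fears",
        "central bank raises interest rates",
        "crude oil futures climb again"]
labels = [0, 1, 0]                        # topic class per document

# TF-IDF weighting of index terms, as in the preprocessing phase.
vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(docs)

# Chi-square dependency of each term t on the class c: higher scores
# mean the term is more informative for separating the classes.
scores, _ = chi2(X, labels)
for term, score in sorted(zip(vec.get_feature_names_out(), scores),
                          key=lambda p: -p[1])[:5]:
    print(f"{term}: {score:.3f}")
```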

Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok;Yang, Seok Woo;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.105-122
    • /
    • 2019
  • Dimensionality reduction is one way to handle big data in text mining. For dimensionality reduction, we should consider the density of the data, which has a significant influence on the performance of sentence classification. Higher-dimensional data requires many computations, which can cause high computational cost and overfitting in the model, so a dimension reduction process is necessary to improve model performance. Diverse methods have been proposed, ranging from merely reducing noise in the data, such as misspellings or informal text, to incorporating semantic and syntactic information. Moreover, the representation and selection of text features affect the performance of the classifier in sentence classification, one of the fields of natural language processing. The common goal of dimension reduction is to find a latent space that is representative of the raw data in the observation space. Existing methods use various algorithms for dimensionality reduction, such as feature extraction and feature selection. Besides these algorithms, word embeddings, which learn low-dimensional vector space representations of words that capture semantic and syntactic information, are also used. To improve performance, recent studies have suggested modifying the word dictionary according to the positive and negative scores of pre-defined words. The basic idea of this study is that similar words have similar vector representations: once a feature selection algorithm marks certain words as unimportant, we assume that words similar to them also have no impact on sentence classification. This study proposes two ways to achieve more accurate classification, which eliminate words selectively under specific rules and construct word embeddings based on Word2Vec. To select words of low importance from the text, we use the information gain algorithm to measure importance and cosine similarity to search for similar words. First, we eliminate words with comparatively low information gain values from the raw text and build the word embedding. Second, we additionally eliminate words that are similar to the low-information-gain words and build the word embedding. The filtered text and word embeddings are then fed to the deep learning models: a Convolutional Neural Network and an attention-based bidirectional LSTM. This study uses customer reviews of Kindle products on Amazon.com, IMDB, and Yelp as datasets and classifies each dataset with the deep learning models. Reviews that received more than five helpful votes, with a helpful-vote ratio over 70%, were classified as helpful reviews. Since Yelp shows only the number of helpful votes, we extracted 100,000 reviews with more than five helpful votes by random sampling from 750,000 reviews. Minimal preprocessing, such as removing numbers and special characters, was applied to each dataset. To evaluate the proposed methods, we compared their performance against Word2Vec and GloVe embeddings that use all the words, and showed that one of the proposed methods outperforms the embeddings with all words: removing unimportant words yields better performance, although removing too many words lowers it.
  Future research should consider diverse preprocessing schemes and an in-depth analysis of word co-occurrence when measuring similarity between words. Also, we applied the proposed method only with Word2Vec; other embedding methods such as GloVe, fastText, and ELMo can be combined with the proposed elimination methods, and the possible combinations of embedding and elimination methods remain to be explored.
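The second proposed variant, also dropping words whose embeddings are close to already-eliminated low-information-gain words, might be sketched with gensim as follows; the training parameters, the similarity threshold, and the function names are assumptions.

```python
from gensim.models import Word2Vec

def expand_elimination_list(low_ig_words, tokenized_docs,
                            topn=5, threshold=0.7):
    """Extend a low-information-gain word list with its Word2Vec
    neighbours (tokenized_docs: list of token lists)."""
    model = Word2Vec(tokenized_docs, vector_size=100, window=5,
                     min_count=2, workers=4)
    eliminated = set(low_ig_words)
    for word in low_ig_words:
        if word not in model.wv:
            continue
        for neighbour, sim in model.wv.most_similar(word, topn=topn):
            if sim >= threshold:              # cosine-similarity gate
                eliminated.add(neighbour)
    return eliminated
```

The returned set would then be removed from the raw text before rebuilding the embedding and training the classifier.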

Reliability measure improvement of Phoneme character extract In Out-of-Vocabulary Rejection Algorithm (미등록어 거절 알고리즘에서 음소 특성 추출의 신뢰도 측정 개선)

  • Oh, Sang-Yeob
    • Journal of Digital Convergence
    • /
    • v.10 no.6
    • /
    • pp.219-224
    • /
    • 2012
  • In mobile communication terminals, vocabulary recognition systems suffer from low recognition rates because phoneme features are extracted from inaccurate vocabulary; as a result, phonemes go unrecognized or are confused with similar phonemes. To solve this problem, this paper proposes a system model based on a two-step process. First, each input phoneme is represented by a number that measures the distance between phonemes through a phoneme likelihood process. The next step recognizes the result through a reliability measure. Through this process, we minimize the phoneme confusion errors caused by inaccurate vocabulary and perform error correction on error-prone vocabulary using phoneme likelihood and reliability. In a system performance comparison, the proposed method improved recognition by 2.7% over a method using error pattern learning and semantic patterns.
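The abstract does not give the exact reliability measure, but a common form of such a confidence check is a log-likelihood ratio against a filler (background) model; the following is a hypothetical sketch, not the paper's formulation.

```python
import numpy as np

def reject_oov(phoneme_loglikes, filler_loglike, threshold=0.5):
    """Confidence-measure sketch for out-of-vocabulary rejection: the best
    in-vocabulary phoneme score is normalized against a filler/background
    score, and the token is rejected when confidence falls below a threshold.

    phoneme_loglikes: log-likelihoods of the candidate phonemes.
    filler_loglike: log-likelihood under a generic filler model.
    Returns True when the input should be treated as out-of-vocabulary.
    """
    best = np.max(phoneme_loglikes)
    confidence = best - filler_loglike        # log-likelihood ratio
    return confidence < threshold
```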

Deep Window Detection in Street Scenes

  • Ma, Wenguang;Ma, Wei
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.2
    • /
    • pp.855-870
    • /
    • 2020
  • Windows are key components of building facades. Detecting windows, which is crucial to 3D semantic reconstruction and scene parsing, is a challenging task in computer vision. Early methods tried to solve window detection with hand-crafted features and traditional classifiers. However, these methods are unable to handle the diversity of window instances in real scenes and suffer from heavy computational costs. Recently, convolutional neural network-based object detection algorithms have attracted much attention due to their good performance. Unfortunately, directly training them for challenging window detection does not achieve satisfactory results. In this paper, we propose an approach for window detection. It involves an improved Faster R-CNN architecture featuring a window region proposal network, RoI feature fusion, and a context enhancement module. In addition, a post-optimization process that exploits the regular distribution of windows refines the detection results produced by the improved deep architecture. Furthermore, we present a newly collected dataset, which is to date the largest for window detection in real street scenes. Experimental results on both existing datasets and the new dataset show that the proposed method has outstanding performance.
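As a baseline for the kind of improved Faster R-CNN described here, torchvision's stock detector can be adapted to a single "window" class; the paper's window region proposal network, RoI feature fusion, and context enhancement module are not reproduced in this sketch.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def window_detector(num_classes=2):        # background + window
    """Baseline Faster R-CNN repurposed for window detection: load a
    COCO-pretrained model and swap in a two-class box predictor."""
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
        weights="DEFAULT")
    in_feats = model.roi_heads.box_predictor.cls_score.in_features
    # Replace the classifier head with a window-vs-background predictor.
    model.roi_heads.box_predictor = FastRCNNPredictor(in_feats, num_classes)
    return model
```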