• Title/Abstract/Keyword: Feature Acquisition

Search results: 167 items (processing time: 0.024 s)

형태적 특징 정보를 이용한 C.Elegans의 개체 분류 (Classification of C.elegans Behavioral Phenotypes Using Shape Information)

  • 전미라;나원;홍승범;백중환
    • 한국통신학회논문지
    • /
    • Vol. 28, No. 7C
    • /
    • pp.712-718
    • /
    • 2003
  • The nematode C. elegans is widely used in gene-function studies, but its mutant strains are difficult to distinguish by eye. To address this, a computer-vision system for automatic classification is being developed; a previous paper [1] described the image pre-processing stage of that automatic classification system. This paper presents the morphological features that can be extracted from the pre-processed image data. The features are divided into those related to the worm's size and those related to its posture, and the extraction algorithm for each feature is expressed mathematically. Performance was verified by directly classifying worms with the presented morphological features, using hierarchical clustering as the classification algorithm. As a result, all four worm types used in the experiment were classified correctly at a rate of 90% or higher.
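
A rough illustration of the classification step described above (my own sketch, not the authors' code): hierarchical clustering with Ward linkage from SciPy applied to placeholder worm shape-feature vectors, with the tree cut into four groups to mirror the four strains in the experiment. The feature values and their dimensionality are assumptions.

```python
# Sketch: hierarchical clustering of hypothetical C. elegans shape features.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Placeholder feature matrix: one row per worm,
# e.g. [body length, mean width, mean curvature, curvature std].
features = rng.normal(size=(40, 4))

# Standardize so no single measurement dominates the distance metric.
features = (features - features.mean(axis=0)) / features.std(axis=0)

# Agglomerative clustering, cut into 4 clusters (one per assumed strain).
Z = linkage(features, method="ward")
labels = fcluster(Z, t=4, criterion="maxclust")
print(labels)
```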

Plant Species Identification based on Plant Leaf Using Computer Vision and Machine Learning Techniques

  • Kaur, Surleen;Kaur, Prabhpreet
    • Journal of Multimedia Information System
    • /
    • Vol. 6, No. 2
    • /
    • pp.49-60
    • /
    • 2019
  • Plants are crucial for life on Earth. There is a wide variety of plant species, and the number increases every year. Knowledge of species is a necessity for various groups in society, such as foresters, farmers, environmentalists, and educators, each in their own work areas, which makes species identification an interdisciplinary interest. It requires expert knowledge, however, and becomes a tedious and challenging task for non-experts who have little or no knowledge of the typical botanical terms. Advancements in the fields of machine learning and computer vision can help make this task comparatively easier. No system developed so far can identify all plant species, but some efforts have been made, and this study is another such attempt. Plant identification usually involves four steps, i.e., image acquisition, pre-processing, feature extraction, and classification. In this study, images from the Swedish leaf dataset have been used, which contains 1,125 images of 15 different species. Pre-processing is performed using a Gaussian filtering mechanism, and texture and color features are then extracted. Finally, classification is done using a multiclass support vector machine, which achieved an accuracy of approximately 93.26%, which we aim to enhance further.
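
A minimal sketch of the final classification stage, assuming texture and color features have already been extracted: a multiclass SVM from scikit-learn trained on placeholder feature vectors with 15 class labels, matching the species count of the Swedish leaf dataset. The feature dimensionality and the random data are invented for illustration.

```python
# Sketch: multiclass SVM on pre-extracted (here: synthetic) leaf features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(1125, 12))          # placeholder texture + color features
y = rng.integers(0, 15, size=1125)       # 15 species, as in the Swedish leaf dataset

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", decision_function_shape="ovr"))
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```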

비정형의 건설환경 매핑을 위한 레이저 반사광 강도와 주변광을 활용한 향상된 라이다-관성 슬램 (Intensity and Ambient Enhanced Lidar-Inertial SLAM for Unstructured Construction Environment)

  • 정민우;정상우;장혜수;김아영
    • 로봇학회논문지
    • /
    • Vol. 16, No. 3
    • /
    • pp.179-188
    • /
    • 2021
  • Construction monitoring is one of the key modules in smart construction. Unlike a structured urban environment, construction-site mapping is challenging due to the characteristics of an unstructured environment; for example, irregular feature points and unreliable matching prevent creating a map that can be used for management. To tackle this issue, we propose a system for data acquisition in unstructured environments and a framework, Intensity and Ambient Enhanced Lidar Inertial Odometry via Smoothing and Mapping (IA-LIO-SAM), that achieves highly accurate robot trajectories and mapping. IA-LIO-SAM uses the same factor graph as Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping (LIO-SAM). Enhancing the existing LIO-SAM, IA-LIO-SAM leverages each point's intensity and ambient values to remove unnecessary feature points. These additional values also act as new factors in the K-Nearest Neighbor (KNN) search, allowing accurate comparisons between stored points and scanned points. The performance was verified in three different environments and compared with LIO-SAM.
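
The sketch below is not IA-LIO-SAM itself; it only illustrates the idea of letting intensity and ambient values take part in nearest-neighbor association by appending them, with an assumed weighting, to the 3-D coordinates before a KD-tree query. All data and weights are placeholders.

```python
# Sketch: nearest-neighbor search on points augmented with intensity/ambient.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
xyz = rng.uniform(-10, 10, size=(5000, 3))      # placeholder scan geometry
intensity = rng.uniform(0, 1, size=(5000, 1))   # laser return intensity
ambient = rng.uniform(0, 1, size=(5000, 1))     # ambient (near-IR) channel

w_geo, w_app = 1.0, 5.0                          # assumed channel weighting
augmented = np.hstack([w_geo * xyz, w_app * intensity, w_app * ambient])

tree = cKDTree(augmented)
dist, idx = tree.query(augmented[:10], k=5)      # neighbors must agree in
print(idx.shape)                                 # geometry AND appearance
```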

영상 화질 평가 딥러닝 모델 재검토: 스트라이드 컨볼루션이 풀링보다 좋은가? (Revisiting Deep Learning Model for Image Quality Assessment: Is Strided Convolution Better than Pooling?)

  • 우딘 에이에프엠 사합;정태충;배성호
    • 한국방송∙미디어공학회:학술대회논문집
    • /
    • 한국방송∙미디어공학회 2020년도 추계학술대회
    • /
    • pp.29-32
    • /
    • 2020
  • Because the image acquisition process is imperfect, noise is inevitably introduced. As a result, objective image quality assessment (IQA) plays an important role in estimating the visual quality of a noisy image. Plenty of IQA methods have been proposed, including traditional signal-processing-based methods as well as recent deep-learning-based methods, with the latter showing promising performance due to their rich representation ability. The deep-learning-based methods consist of several convolution and downsampling layers for feature extraction and fully connected layers for regression. Usually, the downsampling is performed by a max-pooling layer after each convolutional block. We show that this max-pooling causes information loss despite its acknowledged importance. Consequently, we propose an improved IQA method that replaces the max-pooling layers with strided convolutions to downsample the feature space; since strided convolution layers have learnable parameters, they preserve the most useful features and discard redundant information, thereby improving prediction accuracy. The experimental results verify the effectiveness of the proposed method.
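
The abstract's central point, replacing parameter-free max-pooling with a learnable strided convolution for downsampling, can be shown in a few lines of PyTorch. The channel counts and input size below are assumptions, not the paper's actual architecture.

```python
# Sketch: max-pooling block vs. strided-convolution block for downsampling.
import torch
import torch.nn as nn

pooled_block = nn.Sequential(
    nn.Conv2d(32, 64, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=2, stride=2),                   # fixed, parameter-free
)

strided_block = nn.Sequential(
    nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),   # learnable downsampling
    nn.ReLU(inplace=True),
)

x = torch.randn(1, 32, 64, 64)
print(pooled_block(x).shape, strided_block(x).shape)  # both halve the spatial size
```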


Human hand gesture identification framework using SIFT and knowledge-level technique

  • Muhammad Haroon;Saud Altaf;Zia-ur- Rehman;Muhammad Waseem Soomro;Sofia Iqbal
    • ETRI Journal
    • /
    • Vol. 45, No. 6
    • /
    • pp.1022-1034
    • /
    • 2023
  • In this study, the impact of varying lighting conditions on recognition and decision-making was considered. A luminosity approach was presented to increase gesture recognition performance under varied lighting. An efficient framework was proposed for sensor-based sign language gesture identification, comprising image acquisition, data preparation, feature extraction, and recognition. The depth images were collected using multiple Microsoft Kinect devices, and data were acquired at varying resolutions to demonstrate the idea. A case study was designed to attain acceptable accuracy in gesture recognition under varying lighting. The dataset was created using American Sign Language (ASL) and analyzed under various lighting conditions. In the ASL images, significant feature points were selected using the scale-invariant feature transform (SIFT). Finally, an artificial neural network (ANN) classified the hand gestures using the selected features for validation. The suggested method was successful across a variety of illumination conditions and image sizes. The overall effectiveness of the ANN architecture was shown by a 97.6% recognition accuracy on the 26-letter dataset, with an error rate of just 2.4%.
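
A small illustrative pipeline under assumptions: synthetic grayscale images stand in for the Kinect data, a mean SIFT descriptor stands in for the paper's feature encoding, and scikit-learn's MLPClassifier stands in for the ANN. None of these choices is taken from the paper.

```python
# Sketch: SIFT descriptors summarized into a fixed-length vector, then an MLP.
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

def sift_summary(gray):
    """Mean SIFT descriptor of an image (zeros if no keypoints are found)."""
    sift = cv2.SIFT_create()
    _, desc = sift.detectAndCompute(gray, None)
    return np.zeros(128) if desc is None else desc.mean(axis=0)

rng = np.random.default_rng(0)
images = [rng.integers(0, 256, size=(120, 120)).astype(np.uint8) for _ in range(20)]
labels = rng.integers(0, 26, size=len(images))   # 26 ASL letters (placeholder labels)

X = np.stack([sift_summary(img) for img in images])
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
clf.fit(X, labels)
print(clf.predict(X[:3]))
```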

구름이 포함된 고해상도 다시기 위성영상의 자동 상호등록 (Automatic Co-registration of Cloud-covered High-resolution Multi-temporal Imagery)

  • 한유경;김용일;이원희
    • 대한공간정보학회지
    • /
    • Vol. 21, No. 4
    • /
    • pp.101-107
    • /
    • 2013
  • Commercially available high-resolution satellite images are generally provided with coordinates, but local positional discrepancies between images occur depending on the sensor attitude at acquisition time and the characteristics of the terrain surface. A co-registration step that aligns the coordinates of the images is therefore essential. When clouds are present in the images, however, extracting matching pairs between two images becomes difficult and many false matching pairs tend to be extracted. This study therefore proposes a method for automatic co-registration of cloud-covered high-resolution KOMPSAT-2 images. SIFT, a representative feature-based matching technique, was used; a circular buffer was generated around each feature point of the reference image, and only the feature points of the target image lying inside the buffer were selected as candidate matching pairs, in order to raise the matching rate. Applying the proposed method to various cloud-covered test areas showed a higher matching rate than SIFT alone and confirmed improved co-registration accuracy.
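
A sketch of the circular-buffer idea, with an assumed buffer radius and OpenCV's SIFT/BFMatcher standing in for the authors' implementation: a candidate match is kept only if the target keypoint lies within the buffer around the reference keypoint's position (reasonable when both images are already roughly georeferenced).

```python
# Sketch: SIFT matching constrained by a circular buffer around each reference keypoint.
import numpy as np
import cv2

def buffered_matches(img_ref, img_tgt, radius=50.0, ratio=0.75):
    sift = cv2.SIFT_create()
    kp_r, des_r = sift.detectAndCompute(img_ref, None)
    kp_t, des_t = sift.detectAndCompute(img_tgt, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    kept = []
    for pair in matcher.knnMatch(des_r, des_t, k=2):
        if len(pair) < 2:
            continue
        m, n = pair
        p_r = np.array(kp_r[m.queryIdx].pt)
        p_t = np.array(kp_t[m.trainIdx].pt)
        # Lowe's ratio test plus the spatial buffer constraint.
        if m.distance < ratio * n.distance and np.linalg.norm(p_r - p_t) <= radius:
            kept.append(m)
    return kept
```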

고해상도 광학영상과 SAR 영상 간 정합 기법 (Registration Method between High Resolution Optical and SAR Images)

  • 전형주;김용일
    • 대한원격탐사학회지
    • /
    • Vol. 34, No. 5
    • /
    • pp.739-747
    • /
    • 2018
  • Research on the integrated analysis and fusion of multi-sensor satellite images is being actively conducted, and registration between the multi-sensor images must precede it. Representative registration techniques include algorithms such as SIFT (Scale Invariant Feature Transform). However, because optical and SAR (Synthetic Aperture Radar) images are acquired with different sensor attitudes and radiometric characteristics, the spectral characteristics of the two images are related nonlinearly, which makes it difficult to apply the existing techniques. To solve this, this study proposes an improved image registration technique that combines the feature-based registration technique SAR-SIFT with the shape descriptor vector DLSS (Dense Local Self-Similarity). The experiments were carried out on a KOMPSAT-2 image and a Cosmo-SkyMed image acquired over the Daejeon area. For comparative evaluation of the proposed technique, the representative existing techniques SIFT and SAR-SIFT were used for feature point and matching pair extraction. The experimental results show that, unlike the existing techniques, the proposed technique extracted true matching pairs in both test areas. The registration obtained from the extracted matching pairs was qualitatively good, and quantitatively it also showed good results, with RMSE (Root Mean Square Error) values of 1.66 m and 2.65 m in the two test areas, respectively.
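
The full SAR-SIFT + DLSS pipeline is beyond a short sketch, but the reported accuracy figure corresponds to a simple final step: fit a transformation to the extracted matching pairs and report the residual RMSE. The affine model and the synthetic point pairs below are assumptions for illustration only.

```python
# Sketch: least-squares affine fit to matched points and the resulting RMSE.
import numpy as np

def affine_rmse(src, dst):
    """Fit dst ~ [src, 1] @ coeffs by least squares; return residual RMSE."""
    A = np.hstack([src, np.ones((len(src), 1))])
    coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)
    residuals = A @ coeffs - dst
    return np.sqrt((residuals ** 2).sum(axis=1).mean())

rng = np.random.default_rng(0)
src = rng.uniform(0, 1000, size=(30, 2))                       # placeholder matches
dst = src @ np.array([[1.0, 0.01], [-0.01, 1.0]]) + [5.0, -3.0] + rng.normal(0, 1, (30, 2))
print("RMSE:", affine_rmse(src, dst))
```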

Virtual Metrology for predicting $SiO_2$ Etch Rate Using Optical Emission Spectroscopy Data

  • Kim, Boom-Soo;Kang, Tae-Yoon;Chun, Sang-Hyun;Son, Seung-Nam;Hong, Sang-Jeen
    • 한국진공학회:학술대회논문집
    • /
    • 한국진공학회 2009년도 제38회 동계학술대회 초록집
    • /
    • pp.464-464
    • /
    • 2010
  • A few years ago, to maintain high stability and production yield of production equipment in a semiconductor fab, on-line monitoring of wafers became required, and semiconductor manufacturers began investigating a software-based process control scheme known as virtual metrology (VM). As semiconductor technology develops, the cost of fabrication tools/facilities has reached its budget limit, and reducing metrology cost can obviously help contain semiconductor manufacturing cost. By virtue of prediction, VM enables wafer-level control (or even control down to site level), reduces within-lot variability, and increases process capability, $C_{pk}$. In this research, we have practiced VM on the $SiO_2$ etch rate with optical emission spectroscopy (OES) data acquired in-situ while the process parameters were simultaneously correlated. To build a process model of the $SiO_2$ via etch, we first performed a series of etch runs according to a statistically designed experiment, known as design of experiments (DOE). OES data were automatically logged along with the etch rate, and the OES spectra correlated with the $SiO_2$ etch rate were selected. Once the features of the OES data were selected, the preprocessed OES spectra were used for in-situ sensor-based VM modeling. An ICP-RIE operating at 13.56 MHz, manufactured by Plasmart, Ltd., was employed in this experiment, with a single optical fiber attached for in-situ OES data acquisition. Before applying statistical feature selection, empirical feature selection of the OES data was performed first, in order not to be statistically misled by random noise or by large variations of responses insignificantly correlated with the process itself. The accuracy of the proposed VM still needs to be improved before it can successfully replace the existing metrology, but there is no doubt that VM can support the engineering decision of "go or no go" at each consecutive processing step.
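
A toy version of the virtual-metrology workflow, with synthetic spectra in place of the tool's OES data: keep the wavelengths most correlated with the measured etch rate and fit a regression model that predicts etch rate from them. The correlation-based selection and the linear model are assumptions, not the paper's exact procedure.

```python
# Sketch: correlation-based OES feature selection + linear VM model.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_runs, n_wl = 40, 200
spectra = rng.normal(size=(n_runs, n_wl))        # placeholder OES features per run
etch_rate = 300 + 20 * spectra[:, 17] - 15 * spectra[:, 42] + rng.normal(0, 2, n_runs)

# Keep the wavelengths with the highest absolute correlation to the etch rate.
corr = np.array([np.corrcoef(spectra[:, j], etch_rate)[0, 1] for j in range(n_wl)])
selected = np.argsort(np.abs(corr))[-10:]

model = LinearRegression().fit(spectra[:, selected], etch_rate)
print("selected wavelength indices:", selected)
print("R^2 on the training runs:", model.score(spectra[:, selected], etch_rate))
```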


조합기법을 이용한 다중생체신호의 특징추출에 의한 실시간 인증시스템 개발 (Development of Real-Time Verification System by Features Extraction of Multimodal Biometrics Using Hybrid Method)

  • 조용현
    • 한국산업융합학회 논문집
    • /
    • Vol. 9, No. 4
    • /
    • pp.263-268
    • /
    • 2006
  • This paper presents a real-time verification system based on feature extraction from multimodal biometrics using a hybrid method that combines moment balance and independent component analysis (ICA). Moment balance is applied to reduce the computational load by extracting the valid signal and excluding the needless background of the multimodal biometrics. ICA is applied to increase the verification performance by removing overlapping signals through extraction of a statistically independent basis of the signals. The multimodal biometrics used are faces and fingerprints, acquired by a web camera and a fingerprint acquisition device, respectively. The proposed system has been applied to the fusion problem of 48 face and 48 fingerprint images (24 persons * 2 scenes) of 320*240 pixels. The experimental results show that the proposed system achieves superior verification performance in terms of both speed and verification rate.
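
A minimal sketch of the ICA feature-extraction and matching steps using scikit-learn's FastICA on synthetic images (downsampled to 64x64 here; the paper works with 320*240 pixels). The data, component count, and distance-based decision rule are assumptions.

```python
# Sketch: ICA basis for biometric images, then a distance-based verification check.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
gallery = rng.normal(size=(48, 64 * 64))          # 48 enrolled images, flattened
probe = gallery[5] + rng.normal(0, 0.1, 64 * 64)  # noisy re-capture of sample 5

ica = FastICA(n_components=24, random_state=0)    # statistically independent basis
gallery_feats = ica.fit_transform(gallery)
probe_feats = ica.transform(probe.reshape(1, -1))

dists = np.linalg.norm(gallery_feats - probe_feats, axis=1)
print("closest gallery entry:", dists.argmin())   # expected: 5
```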


AUTOMATIC ROAD NETWORK EXTRACTION USING LIDAR RANGE AND INTENSITY DATA

  • Kim, Moon-Gie;Cho, Woo-Sug
    • 대한원격탐사학회:학술대회논문집
    • /
    • 대한원격탐사학회 2005년도 Proceedings of ISRS 2005
    • /
    • pp.79-82
    • /
    • 2005
  • The need for road data keeps increasing in industrial society, and roads are being repaired and newly constructed in many areas. As national, city, and regional development proceeds, updating and acquiring road data for GIS (Geographical Information System) is very necessary. In this study, a fusion method using the range data (3D ground coordinate data) and the intensity data of stand-alone LiDAR is used for road extraction, and digital image processing methods are then applied. The intensity data of LiDAR have only recently begun to be studied; this study shows a feasible method for road extraction using intensity data. Intensity and range data are acquired at the same time, so LiDAR does not suffer from the problems of multi-sensor data fusion methods. Further advantages of the intensity data are that they are already geocoded, are at the same scale as the real world, and can be used to make ortho-photos. Lastly, a quantitative and qualitative analysis is presented for the extracted road image, which is compared with a 1:1,000 digital map.
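
The abstract does not spell out the extraction rule, so the sketch below only shows one plausible use of geocoded intensity data: thresholding an intensity raster to a low-reflectance band typical of asphalt, then cleaning the mask morphologically. The raster and all thresholds are assumed.

```python
# Sketch: road mask from a LiDAR intensity raster via thresholding + morphology.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
intensity = rng.uniform(60, 255, size=(200, 200))            # placeholder intensity grid
intensity[90:110, :] = rng.uniform(10, 40, size=(20, 200))   # a dark "road" strip

road_mask = (intensity > 5) & (intensity < 50)               # assumed asphalt band
road_mask = ndimage.binary_opening(road_mask, structure=np.ones((3, 3)))
road_mask = ndimage.binary_closing(road_mask, structure=np.ones((5, 5)))
print("road pixels:", int(road_mask.sum()))
```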
