Title/Summary/Keyword: Field Extraction Algorithm

Search results: 167

Stable and Precise Multi-Lane Detection Algorithm Using Lidar in Challenging Highway Scenario (어려운 고속도로 환경에서 Lidar를 이용한 안정적이고 정확한 다중 차선 인식 알고리즘)

  • Lee, Hanseul;Seo, Seung-Woo
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.12 / pp.158-164 / 2015
  • Lane detection is one of the key components of autonomous vehicle technology because lane keeping and path planning are built on it. Cameras are commonly used for lane detection, but they suffer from severe limitations such as a narrow field of view and sensitivity to illumination. A Lidar sensor, on the other hand, offers a large field of view and is little affected by illumination because it uses intensity information. Existing approaches based on methods such as the Hough transform or histograms can hardly handle multiple lanes when lanes and road markings co-occur. In this paper, we propose a method based on RANSAC and regularization that provides stable and precise detection when lanes and road markings co-occur in highway scenarios. This is achieved by precise lane-point extraction using circular-model RANSAC and regularization-aided least-squares fitting. Through quantitative evaluation, we verify that the proposed algorithm detects multiple lanes with high accuracy in real time on our own acquired road data.
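The abstract does not spell out the circular-model RANSAC step, but the general pattern is standard: repeatedly sample the minimum number of points that define a circle, and keep the model with the most inliers. A minimal sketch (function names and parameters are illustrative, not the paper's):

```python
import math
import random

def circle_from_3_points(p1, p2, p3):
    """Return (cx, cy, r) of the circle through three points, or None if collinear."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy, math.hypot(ax - ux, ay - uy)

def ransac_circle(points, n_iter=200, threshold=0.05, seed=0):
    """Fit a circle with RANSAC: sample 3 points, keep the model with most inliers."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(n_iter):
        model = circle_from_3_points(*rng.sample(points, 3))
        if model is None:
            continue
        cx, cy, r = model
        inliers = [p for p in points
                   if abs(math.hypot(p[0] - cx, p[1] - cy) - r) < threshold]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
    return best_model, best_inliers
```

Because the sample size is minimal (three points), a handful of road-marking outliers rarely survives the inlier count, which is what makes the fit stable in the co-occurrence case the paper targets.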

Extraction of Lumbar Multifidus Muscle using Ultrasound Imaging (초음파 영상에서 다열근 추출)

  • Kim, Kwang-Baek;Shin, Sang-Ho
    • Journal of the Korea Society of Computer and Information / v.16 no.2 / pp.55-60 / 2011
  • In this paper, we propose a new method for extracting muscles from lumbar ultrasound images. The proposed method sets distortion-free areas, selected with a field expert's assistance, as regions of measurement interest and removes noise from the initial ultrasound images. It then emphasizes brightness contrast with an Ends-in search stretching algorithm and separates the thoracic vertebra from the subcutaneous fat area using morphological characteristics. A 4-directional contour tracing algorithm is applied to extract the bottom of the subcutaneous fat area. Extracting the thoracic vertebra area likewise requires noise removal and morphological analysis among candidate areas obtained by controlling min-max brightness. The muscle thickness is then defined as the distance between the subcutaneous fat area and the extracted thoracic vertebra. An experiment analyzing 368 images verifies that the proposed method measures muscle thickness more effectively than previous approaches.
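Ends-in search stretching is a contrast stretch that clips a fixed fraction of the histogram at both ends and linearly maps the remainder to the full 0..255 range. A minimal sketch on a flat list of pixel values (the percentile parameters are assumptions, not the paper's settings):

```python
def ends_in_stretch(pixels, low_pct=5, high_pct=95):
    """Ends-in search stretching: clip the histogram tails at the given
    percentiles and linearly stretch the remaining range to 0..255."""
    ranked = sorted(pixels)
    lo = ranked[len(ranked) * low_pct // 100]
    hi = ranked[min(len(ranked) * high_pct // 100, len(ranked) - 1)]
    if hi == lo:
        return list(pixels)
    out = []
    for v in pixels:
        if v <= lo:
            out.append(0)          # saturate the dark tail
        elif v >= hi:
            out.append(255)        # saturate the bright tail
        else:
            out.append(round((v - lo) * 255 / (hi - lo)))
    return out
```

Saturating the tails is what "emphasizes brightness contrast": low-contrast ultrasound data that occupied a narrow band of gray levels ends up spanning the whole range.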

Extraction of Flow Velocity Information using Direct Wave and Application of Waveform Inversion Considering Flow Velocity (직접파를 이용한 배경매질 유속정보 도출과 유속을 고려한 파형역산의 적용)

  • Lee, Dawoon;Chung, Wookeen;Shin, Sungryul;Bae, Ho Seuk
    • Geophysics and Geophysical Exploration / v.20 no.4 / pp.199-206 / 2017
  • Field data obtained from marine exploration are influenced by various environmental factors such as wind, waves, tidal currents, and the flow velocity of the background medium. Most environmental factors, except for flow velocity, are properly corrected in the data-processing stage. In this study, wave-equation modeling that accounts for flow velocity is used to generate observation data, and numerical experiments on these data are conducted to analyze the effect of flow velocity on waveform inversion. The numerical examples include results with unrealistic flow velocities. In addition, an algorithm is suggested to numerically extract flow velocity for waveform inversion. The proposed algorithm was applied to the modified Marmousi2 model to obtain results depending on the flow velocity. The effect of flow velocity on the updated physical properties was verified by comparing inversion results that ignore flow velocity with those obtained from the proposed algorithm.

Study of Structure Modeling from Terrestrial LIDAR Data (지상라이다 데이터를 이용한 구조물 모델링 기법 연구)

  • Lee, Kyung-Keun;Jung, Kyeong-Hoon;Kim, Ki-Doo
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.1 / pp.8-15 / 2011
  • In this paper, we propose a new structure-modeling algorithm for the 3D point clouds of terrestrial LIDAR data. Terrestrial LIDAR data contain various obstacles that make it difficult to apply conventional algorithms designed for airborne LIDAR data. In the proposed algorithm, the field data are separated into several clusters by a structure-extraction method that uses color information and the Hough transform. A cluster-based Delaunay triangulation technique is then applied sequentially to model the artificial structure. Each cluster has its own priority, which makes it possible to determine whether a cluster needs to be considered or not. The proposed algorithm not only minimizes the effect of noisy data but also allows interactive control of the modeling level through its cluster-based approach.
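The Hough transform used in the structure-extraction step votes each point into a (rho, theta) parameter grid; bins that accumulate many votes correspond to lines in the data. A minimal point-based sketch (resolution and threshold values are assumptions):

```python
import math
from collections import defaultdict

def hough_lines(points, rho_res=1.0, theta_steps=180, min_votes=20):
    """Vote each 2D point into a (rho, theta) accumulator and return
    the parameter pairs whose vote count passes the threshold."""
    acc = defaultdict(int)
    for x, y in points:
        for t in range(theta_steps):
            theta = math.pi * t / theta_steps
            rho = x * math.cos(theta) + y * math.sin(theta)
            acc[(round(rho / rho_res), t)] += 1   # quantize rho into bins
    return [(r * rho_res, math.pi * t / theta_steps)
            for (r, t), votes in acc.items() if votes >= min_votes]
```

For LIDAR structure extraction, the same voting idea is typically run on a 2D projection or slice of the cloud; points that support no strong line fall outside every high-vote bin and are naturally treated as noise.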

A Supervised Feature Selection Method for Malicious Intrusions Detection in IoT Based on Genetic Algorithm

  • Saman Iftikhar;Daniah Al-Madani;Saima Abdullah;Ammar Saeed;Kiran Fatima
    • International Journal of Computer Science & Network Security / v.23 no.3 / pp.49-56 / 2023
  • Machine learning methods, applied in diverse ways to the Internet of Things (IoT), have been successful thanks to increased computer processing power. They offer an effective way of detecting malicious intrusions in IoT because of their high-level feature extraction capabilities. In this paper, we propose a novel feature selection method for malicious intrusion detection in IoT using an evolutionary technique, the Genetic Algorithm (GA), together with Machine Learning (ML) algorithms. The proposed model classifies the BoT-IoT dataset to evaluate its quality through training and testing with classifiers. The data are reduced, and several preprocessing steps are applied: removal of unnecessary information, null-value checking, label encoding, standard scaling, and data balancing. The GA is applied to the preprocessed data to select the most relevant features and maintain model optimization. The features selected by the GA are fed to ML classifiers such as Logistic Regression (LR) and Support Vector Machine (SVM), and the results are evaluated with performance measures including recall, precision, and F1-score. Two sets of experiments are conducted, and it is concluded that hyperparameter tuning has a significant effect on the performance of both ML classifiers. Overall, SVM remained the best model in both cases, and overall results improved.
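GA-based feature selection usually encodes each candidate subset as a bitmask chromosome and evolves the population with selection, crossover, and mutation; the fitness function would be classifier performance on the selected columns. A generic sketch under those assumptions (all parameters illustrative, and a toy fitness standing in for classifier accuracy):

```python
import random

def ga_feature_select(n_features, fitness, pop_size=20, generations=30,
                      mutation_rate=0.05, seed=0):
    """Evolve feature bitmasks: elitist selection, one-point crossover,
    bit-flip mutation. `fitness` scores a mask (higher is better)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        next_pop = scored[:2]                       # elitism: keep best two
        while len(next_pop) < pop_size:
            a, b = rng.sample(scored[:pop_size // 2], 2)   # parents from top half
            cut = rng.randrange(1, n_features)
            child = a[:cut] + b[cut:]               # one-point crossover
            child = [bit ^ (rng.random() < mutation_rate) for bit in child]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)
```

In the intrusion-detection setting, `fitness` would train LR or SVM on the masked feature columns and return validation F1-score, so the GA searches directly for the subset the classifier can exploit best.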

A Technical Approach for Suggesting Research Directions in Telecommunications Policy

  • Oh, Junseok;Lee, Bong Gyou
    • KSII Transactions on Internet and Information Systems (TIIS) / v.8 no.12 / pp.4467-4488 / 2014
  • Bibliometric analysis is widely used for understanding research domains, trends, and knowledge structures in a particular field. It has mainly been used in information science and is currently being applied to other academic fields. This paper describes an analysis of the academic literature for classifying research domains and suggesting unexplored research areas in telecommunications policy. Application software was developed to retrieve Thomson Reuters' Web of Knowledge (WoK) data via web services; it is also used for text mining the contents and citations of publications. We used three text mining techniques: Keyword Extraction Algorithm (KEA) analysis, co-occurrence analysis, and citation analysis. R software was used to visualize term frequencies and the co-occurrence network among publications. We found that, over the past decade, research has focused on policies for social communication services, the distribution of telecommunications infrastructure, and more practical, data-driven analyses. The citation analysis showed that the publications generally receive citations, but most did not receive high citation counts within telecommunications policy. Nevertheless, the citation productivity of papers increased over the most recent ten years compared with research before 2004. The distribution methods of infrastructure, and issues of inequity and the digital gap, appeared as topics in important references. We propose the need for new research domains, since the analysis results imply that the decline of policy approaches to technical problems is an issue in past research, and research on policies for new technologies remains insufficient in telecommunications. This study is significant as the first bibliometric analysis of abstracts and citation data in telecommunications, and for the development of software combining web services and text mining techniques. Further research will apply Big Data techniques and additional text mining methods.
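The co-occurrence analysis mentioned above reduces, at its core, to counting how often keyword pairs appear together in the same publication; those counts become the edge weights of the co-occurrence network. A minimal sketch (the example keywords are hypothetical):

```python
from collections import Counter
from itertools import combinations

def keyword_cooccurrence(docs):
    """Count how often each unordered keyword pair appears in the
    same document's keyword set. `docs` is a list of keyword lists."""
    pairs = Counter()
    for keywords in docs:
        # sort so each unordered pair gets one canonical key
        for a, b in combinations(sorted(set(keywords)), 2):
            pairs[(a, b)] += 1
    return pairs
```

Feeding the resulting pair counts into a graph layout (as the paper does with R) makes clusters of frequently co-mentioned policy topics visible.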

Dilated convolution and gated linear unit based sound event detection and tagging algorithm using weak label (약한 레이블을 이용한 확장 합성곱 신경망과 게이트 선형 유닛 기반 음향 이벤트 검출 및 태깅 알고리즘)

  • Park, Chungho;Kim, Donghyun;Ko, Hanseok
    • The Journal of the Acoustical Society of Korea / v.39 no.5 / pp.414-423 / 2020
  • In this paper, we propose a Dilated Convolution Gated Linear Unit (DCGLU) to mitigate the lack of sparsity and the small receptive field caused by the segmentation-map extraction process in sound event detection with weak labels. With the advent of deep learning frameworks, segmentation-map extraction approaches have shown improved performance in noisy environments. However, these methods must maintain the size of the feature map to extract the segmentation map, so the model is constructed without pooling operations. As a result, their performance deteriorates from a lack of sparsity and a small receptive field. To mitigate these problems, we use a GLU to control the flow of information and Dilated Convolutional Neural Networks (DCNNs) to enlarge the receptive field without additional learning parameters. For performance evaluation, we employ the URBAN-SED dataset and a self-organized bird-sound dataset. The experiments show that the proposed DCGLU model outperforms the other baselines. In particular, our method is robust against natural-sound noise at three Signal-to-Noise Ratio (SNR) levels (20 dB, 10 dB, and 0 dB).
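The two building blocks can be illustrated in isolation: a dilated convolution skips `dilation - 1` samples between kernel taps, so a 3-tap kernel with dilation 2 covers a 5-sample span with no extra weights, and a GLU gates one branch by the sigmoid of another. A minimal 1D sketch, not the paper's network:

```python
import math

def dilated_conv1d(x, w, dilation):
    """'Same'-length 1D convolution with a dilated kernel (zero padding).
    The receptive field is (len(w) - 1) * dilation + 1 samples wide."""
    k = len(x)
    out = []
    for i in range(k):
        s = 0.0
        for j in range(len(w)):
            idx = i + (j - len(w) // 2) * dilation   # dilated tap position
            if 0 <= idx < k:
                s += w[j] * x[idx]
        out.append(s)
    return out

def glu(a, b):
    """Gated linear unit: elementwise a * sigmoid(b). The b branch
    learns which parts of the a branch to let through."""
    return [ai * (1.0 / (1.0 + math.exp(-bi))) for ai, bi in zip(a, b)]
```

Stacking such layers with growing dilation (1, 2, 4, ...) widens the receptive field exponentially while keeping the feature-map length the abstract says segmentation-map extraction requires.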

SuperDepthTransfer: Depth Extraction from Image Using Instance-Based Learning with Superpixels

  • Zhu, Yuesheng;Jiang, Yifeng;Huang, Zhuandi;Luo, Guibo
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.10 / pp.4968-4986 / 2017
  • In this paper, we address the difficulty of automatically generating a plausible depth map from a single image of an unstructured environment. The aim is to extrapolate a depth map with a more correct, rich, and distinct depth order that is both quantitatively accurate and visually pleasing. Our technique, fundamentally based on the preexisting DepthTransfer algorithm, transfers depth information at the level of superpixels, within a framework that replaces the pixel basis with instance-based learning. A vital feature of the superpixels, which enhances matching precision, is the posterior incorporation of predictive semantic labels into the depth extraction procedure. Finally, a modified Cross Bilateral Filter is leveraged to refine the final depth field. For training and evaluation, experiments were conducted on the Make3D Range Image Dataset; they demonstrate that this depth estimation method outperforms state-of-the-art methods on the correlation coefficient, mean log10 error, and root mean squared error metrics, and achieves comparable performance on the average relative error metric, in both efficacy and computational efficiency. The approach can automatically convert 2D images into stereo pairs for 3D visualization, producing anaglyph images that are more realistic and immersive.

Open Platform for Improvement of e-Health Accessibility (의료정보서비스 접근성 향상을 위한 개방형 플랫폼 구축방안)

  • Lee, Hyun-Jik;Kim, Yoon-Ho
    • Journal of Digital Contents Society / v.18 no.7 / pp.1341-1346 / 2017
  • In this paper, we design an open service platform that combines individually customized services with intelligent information technology to handle an individual's complex attributes and requests. First, the data collection phase proceeds quickly and accurately by repeating extraction, transformation, and loading. The data generated by the extraction-transformation-loading module are stored in a distributed data system. The data analysis phase generates a variety of patterns using field-specific analysis algorithms. The data processing phase uses distributed parallel processing to improve performance. Data provision operates independently on a device-specific management platform and is exposed as an Open API.
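The repeated extraction-transformation-loading cycle described above has a simple skeleton: pull raw records, normalize them, and hand them to a storage callback, skipping malformed rows so the pipeline keeps running. A minimal sketch with hypothetical callbacks (the paper's actual module interfaces are not given in the abstract):

```python
def run_etl(records, extract, transform, load):
    """Minimal ETL loop: extract a value from each raw record,
    transform it, and pass it to a load callback. Returns the
    number of records successfully loaded; bad rows are skipped."""
    loaded = 0
    for raw in records:
        try:
            value = extract(raw)
            value = transform(value)
        except (KeyError, ValueError):
            continue            # skip malformed records, keep the pipeline running
        load(value)
        loaded += 1
    return loaded
```

In the platform described, `load` would write into the distributed data store, and the analysis phase would consume that store rather than the raw feed.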

Analysis of patent trends of computerized tongue diagnosis systems (설진 시스템 특허동향 분석)

  • Jung, Chang Jin;Lee, Yu Jung;Kim, Jaeuk U.;Kim, Keun Ho
    • The Journal of the Society of Korean Medicine Diagnostics / v.17 no.2 / pp.77-89 / 2013
  • Objectives: Tongue diagnosis is an important diagnostic method in traditional Eastern medicine, and it has high potential for future healthcare because it is easy, quick, and non-contact. Recently, active research and development on computerized tongue diagnosis systems (CTDS) has led to technical advances in photographing techniques and in image extraction and classification algorithms. In this study, we analyzed trends in CTDS patents. Using the WIPS search engine (www.wipsglobal.com), quantitative and qualitative patent analyses were performed for Korea, China, Japan, the U.S.A., and Europe. Methods: For a systematic search and data analysis, we defined patent categories based on application area and technical details. Applying the resulting categorical keywords, we obtained 360 relevant patents on photographing techniques and on image extraction and classification algorithms for diagnosis or security. Results: Companies related to image acquisition, medical imaging, and mobile devices, along with university research groups in East Asia, were the major patent applicants. In all five countries, the number of patents has been increasing since 1980. In particular, technologies related to color correction and image segmentation were the most actively patented categories and are expected to maintain a high application rate.