• Title/Summary/Keyword: Automatic Information Extraction

Search Results: 1,995 (processing time: 0.035 seconds)

Development of the Algorithm for Gaussian Mixture Models based Traffic Accident Auto-Detection in Freeway (GMM(Gaussian Mixture Model)을 적용한 영상처리기법의 연속류도로 사고 자동검지 알고리즘 개발)

  • O, Ju-Taek;Im, Jae-Geuk;Yeo, Tae-Dong
    • Journal of Korean Society of Transportation
    • /
    • v.28 no.3
    • /
    • pp.169-183
    • /
    • 2010
  • Image-based traffic information collection systems have come into widespread use in many countries, since they can not only replace existing loop-based detectors, which have limitations in management and administration, but also provide and manage a wide variety of traffic-related information; their purpose and scope of use are also expanding rapidly. Currently, however, the use of image processing technology in traffic accident management is limited to installing surveillance cameras at locations where accidents are expected to occur and digitizing the recorded data. Accurately recording the sequence of situations around a freeway traffic accident, and then objectively and clearly analyzing how the accident occurred, is the most urgent and important part of resolving it. In this research, a primary judgment is therefore made from the changes in velocity, volume, and occupancy that reflect the attributes of continuous-flow freeways. We then present and implement an active, environmentally adaptive methodology that effectively reduces the false detections which, as many past studies have pointed out, occur frequently even with the Gaussian Mixture Model, considered the best among the well-known methods for reducing environmental obstacles; the final accident decision is made on this basis. The performance of the implementation was evaluated on 12 simulated incidents on an experimental road and in a real-time accident experiment at the Jang-hang IC. The results showed high reliability, with a detection rate of 93.33% and a false alarm rate of 6.7%.
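
GMM background subtraction maintains, per pixel, a mixture of Gaussians and flags pixels that fit no background mode as foreground. As a minimal illustrative sketch (not the paper's algorithm), the single-Gaussian-per-pixel version below keeps a running mean and variance for each pixel and flags outliers, e.g. a stopped vehicle in an otherwise stable freeway view:

```python
class RunningGaussianBackground:
    """Simplified per-pixel background model; one Gaussian per pixel."""

    def __init__(self, width, height, alpha=0.05, k=2.5):
        self.alpha = alpha   # learning rate for the running statistics
        self.k = k           # foreground threshold, in standard deviations
        self.mean = [[0.0] * width for _ in range(height)]
        self.var = [[15.0 ** 2] * width for _ in range(height)]
        self._primed = False

    def apply(self, frame):
        """Update with one grayscale frame; return a 0/1 foreground mask."""
        if not self._primed:  # initialize the model from the first frame
            self.mean = [[float(v) for v in row] for row in frame]
            self._primed = True
            return [[0] * len(row) for row in frame]
        mask = []
        for y, row in enumerate(frame):
            mask_row = []
            for x, v in enumerate(row):
                m, s2 = self.mean[y][x], self.var[y][x]
                fg = (v - m) ** 2 > (self.k ** 2) * s2
                mask_row.append(1 if fg else 0)
                # adapt slowly where foreground is present
                a = self.alpha * (0.1 if fg else 1.0)
                self.mean[y][x] = (1 - a) * m + a * v
                self.var[y][x] = (1 - a) * s2 + a * (v - m) ** 2
            mask.append(mask_row)
        return mask
```

OpenCV's `BackgroundSubtractorMOG2` implements the full multi-modal version; the adaptive false-detection reduction described in the abstract would operate on top of masks like these.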

A Practical Feature Extraction for Improving Accuracy and Speed of IDS Alerts Classification Models Based on Machine Learning (기계학습 기반 IDS 보안이벤트 분류 모델의 정확도 및 신속도 향상을 위한 실용적 feature 추출 연구)

  • Shin, Iksoo;Song, Jungsuk;Choi, Jangwon;Kwon, Taewoong
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.28 no.2
    • /
    • pp.385-395
    • /
    • 2018
  • With the development of the Internet, cyber attacks have become a major threat, and intrusion detection systems (IDS) have been widely deployed to detect them. But an IDS has a critical weakness: it generates a large number of false alarms. One promising technique for reducing false alarms in real time is machine learning, although there are problems that must be solved before it can be used, and many machine learning approaches have been applied to this field. So far, however, researchers have not focused on features: although the features of IDS alerts are important for model performance, this aspect has been ignored. In this paper, we propose a new feature set which can improve the performance of the model and can be extracted from a single alarm. The new features are motivated by security analysts' know-how. We trained and tested the proposed model with the new feature set on real IDS alerts. Experimental results indicate that the proposed model achieves better accuracy and a better false positive rate than an SVM model with ordinary features.
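
The "single alarm" constraint means every feature must be computable from one alert record, without correlating events over time. A hedged sketch of what such extraction might look like — the field names and features below are hypothetical, not the paper's actual set:

```python
def extract_features(alert):
    """Map one IDS alert (a dict) to a fixed-length numeric feature vector.

    Illustrative features only; a real set would encode analyst know-how
    about payload structure, signatures, and service context.
    """
    payload = alert.get("payload", "")
    return [
        len(payload),                                              # payload length
        sum(c.isdigit() for c in payload) / max(len(payload), 1),  # digit ratio
        1 if alert.get("proto") == "TCP" else 0,                   # protocol flag
        alert.get("dst_port", 0),                                  # destination port
        1 if alert.get("dst_port", 0) in (80, 443) else 0,         # web-service flag
    ]
```

Vectors like these can then be fed to any standard classifier (the paper compares against an SVM baseline).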

Adaptive Image Content-Based Retrieval Techniques for Multiple Queries (다중 질의를 위한 적응적 영상 내용 기반 검색 기법)

  • Hong Jong-Sun;Kang Dae-Seong
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.42 no.3 s.303
    • /
    • pp.73-80
    • /
    • 2005
  • Recently there have been many efforts to support searching and browsing based on the visual content of images and multimedia data. Most existing approaches to content-based image retrieval rely on query by example or on user-specified low-level features such as color, shape, and texture, but such queries are neither easy to use nor expressive. In this paper we propose a method of automatic color object extraction and labelling to support multiple queries in a content-based image retrieval system. The approach simplifies the regions within images using a single colorizing algorithm and extracts color objects using the proposed Color and Spatial based Binary tree map (CSB tree map). Then, by searching over a large number of the processed regions, an index for the database is created using the proposed labelling method. This allows very fast indexing of images by their color content and spatial attributes. Furthermore, information about the labelled regions, such as color set, size, and location, enables flexible multiple queries that combine both the color content and the spatial relationships of regions. Experiments on the 'Washington' image database, in comparison with another algorithm, showed the proposed system to perform well.
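
The pipeline of simplifying regions and extracting color objects can be illustrated, in a much-reduced form, by quantizing colors into coarse bins and labelling 4-connected same-color regions; each region's color bin, size, and bounding box are the kind of entries a color-and-spatial index could store. The CSB tree map itself is not reproduced here:

```python
def label_color_regions(image, bins=4):
    """image: 2-D list of (r, g, b) tuples. Returns (labels, regions)."""
    h, w = len(image), len(image[0])
    # quantize each channel into `bins` coarse levels
    q = [[tuple(c * bins // 256 for c in px) for px in row] for row in image]
    labels = [[-1] * w for _ in range(h)]
    regions = []
    for sy in range(h):
        for sx in range(w):
            if labels[sy][sx] != -1:
                continue
            rid, color = len(regions), q[sy][sx]
            stack, pixels = [(sy, sx)], []
            labels[sy][sx] = rid
            while stack:  # flood-fill one 4-connected same-color region
                y, x = stack.pop()
                pixels.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and labels[ny][nx] == -1 \
                            and q[ny][nx] == color:
                        labels[ny][nx] = rid
                        stack.append((ny, nx))
            ys = [p[0] for p in pixels]
            xs = [p[1] for p in pixels]
            regions.append({"color": color, "size": len(pixels),
                            "bbox": (min(ys), min(xs), max(ys), max(xs))})
    return labels, regions
```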

LiDAR Ground Classification Enhancement Based on Weighted Gradient Kernel (가중 경사 커널 기반 LiDAR 미추출 지형 분류 개선)

  • Lee, Ho-Young;An, Seung-Man;Kim, Sung-Su;Sung, Hyo-Hyun;Kim, Chang-Hun
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.18 no.2
    • /
    • pp.29-33
    • /
    • 2010
  • The purpose of LiDAR ground classification is to achieve two goals: acquiring confident ground points with high precision and describing the ground shape in detail. Despite many studies on optimized algorithms for this task, it is very difficult to classify ground points and describe the ground shape from airborne LiDAR data, and especially so in densely forested areas such as Korea. Principal misclassifications are mainly caused by the complex forest canopy hierarchy in Korea and by LiDAR point densities that are relatively coarse for ground classification. Moreover, much LiDAR surveying in South Korea is performed in summer, and for that reason the typical distribution of LiDAR points differs greatly from that in Europe. This study therefore proposes an enhanced ground classification method that considers the characteristics of Korean land cover. First, highly confident candidate LiDAR points are designated as initial ground points, acquired with a big-roller classification algorithm. Second, a weighted gradient kernel (WGK) algorithm is applied to find highly probable ground points among the remaining candidates and include them. The method is very useful for reconstructing terrain deformed by misclassification, because it detects and includes terrain model key points that are important for describing the ground shape at a site. Especially in the case of the deformed banks of river areas, the WGK algorithm produced greatly enhanced classification and reconstruction results.
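
A hedged sketch of the second stage: starting from a set of confident ground points, a remaining candidate is accepted as ground if its distance-weighted slope toward nearby accepted points stays under a threshold. The weighting scheme below is illustrative, not the paper's exact WGK formulation:

```python
import math

def accept_ground(point, ground, radius=5.0, max_slope=0.3):
    """point and each entry of ground are (x, y, z) tuples.

    Accept `point` as ground when the inverse-distance-weighted average
    slope toward accepted ground points within `radius` is small.
    """
    num = den = 0.0
    for gx, gy, gz in ground:
        d = math.hypot(point[0] - gx, point[1] - gy)
        if 0 < d <= radius:
            w = 1.0 / d                         # closer points weigh more
            num += w * abs(point[2] - gz) / d   # slope toward that point
            den += w
    return den > 0 and num / den <= max_slope
```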

Development of Damage Evaluation Technology Considering Variability for Cable Damage Detection of Cable-Stayed Bridges (사장교의 케이블 손상 검출을 위한 변동성이 고려된 손상평가 기술 개발)

  • Ko, Byeong-Chan;Heo, Gwang-Hee;Park, Chae-Rin;Seo, Young-Deuk;Kim, Chung-Gil
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.24 no.6
    • /
    • pp.77-84
    • /
    • 2020
  • In this paper, we developed a damage evaluation technique that can determine the damage location in a long structure such as a cable-stayed bridge, and verified its performance through experiments. The method aims to extract data that can evaluate the damage of the structure without undamaged reference data, and to determine the damage location solely by analyzing the response data of the structure. To this end, we developed a damage assessment technique that considers variability, based on the IMD theory, a statistical pattern recognition technique, to identify the damage location. To evaluate the performance of the developed technique experimentally, cable damage experiments were conducted on a model cable-stayed bridge. As a result, the damage assessment method considering variability automatically generates baseline (undamaged) data according to the external force, and it was confirmed that information capable of determining the damage location of the cable can be extracted by analyzing the generated baseline data together with the measured damage data.
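
As a rough illustration of damage localization by statistical pattern recognition (the IMD-based variability modeling itself is not reproduced here), one can standardize each cable's measured response feature against a baseline sample, such as the automatically generated undamaged data described above, and flag cables whose deviation exceeds a threshold:

```python
import statistics

def locate_damage(baseline, measured, threshold=3.0):
    """baseline: {cable: [feature samples]}; measured: {cable: feature}.

    Returns the cables whose z-score against their baseline sample
    exceeds `threshold` -- candidate damage locations.
    """
    flagged = []
    for cable, samples in baseline.items():
        mu = statistics.mean(samples)
        sigma = statistics.stdev(samples) or 1e-9  # guard against zero spread
        if abs(measured[cable] - mu) / sigma > threshold:
            flagged.append(cable)
    return flagged
```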

Development of an AutoML Web Platform for Text Classification Automation (텍스트 분류 자동화를 위한 AutoML 웹 플랫폼 개발)

  • Ha-Yoon Song;Jeon-Seong Kang;Beom-Joon Park;Junyoung Kim;Kwang-Woo Jeon;Junwon Yoon;Hyun-Joon Chung
    • The Transactions of the Korea Information Processing Society
    • /
    • v.13 no.10
    • /
    • pp.537-544
    • /
    • 2024
  • The rapid advancement of artificial intelligence and machine learning technologies is driving innovation across various industries, with natural language processing offering substantial opportunities for the analysis and processing of text data. The development of effective text classification models requires several complex stages, including data exploration, preprocessing, feature extraction, model selection, hyperparameter optimization, and performance evaluation, all of which demand significant time and domain expertise. Automated machine learning (AutoML) aims to automate these processes, thus allowing practitioners without specialized knowledge to develop high-performance models efficiently. However, current AutoML frameworks are primarily designed for structured data, which presents challenges for unstructured text data, as manual intervention is often required for preprocessing and feature extraction. To address these limitations, this study proposes a web-based AutoML platform that automates text preprocessing, word embedding, model training, and evaluation. The proposed platform substantially enhances the efficiency of text classification workflows by enabling users to upload text data, automatically generate the optimal ML model, and visually present performance metrics. Experimental results across multiple text classification datasets indicate that the proposed platform achieves high levels of accuracy and precision, with particularly notable performance when utilizing a Stacked Ensemble approach. This study highlights the potential for non-experts to effectively analyze and leverage text data through automated text classification and outlines future directions to further enhance performance by integrating large language models.
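
The stages the platform automates — preprocess, vectorize, train, evaluate — can be sketched in miniature with a bag-of-words nearest-centroid classifier. This is a toy stand-in: the platform itself searches over many candidate models, including stacked ensembles:

```python
from collections import Counter

def tokenize(text):
    """Minimal preprocessing: lowercase, split, keep alphabetic tokens."""
    return [t for t in text.lower().split() if t.isalpha()]

def train(samples):
    """samples: list of (text, label). Returns label -> token-count centroid."""
    centroids = {}
    for text, label in samples:
        centroids.setdefault(label, Counter()).update(tokenize(text))
    return centroids

def predict(centroids, text):
    """Pick the label whose centroid overlaps the input tokens the most."""
    toks = Counter(tokenize(text))
    def score(c):
        return sum(toks[t] * c[t] for t in toks) / (sum(c.values()) or 1)
    return max(centroids, key=lambda lbl: score(centroids[lbl]))
```

An AutoML layer would wrap code like this in a search loop over vectorizers, models, and hyperparameters, reporting the metrics of the best pipeline.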

RAUT: An end-to-end tool for automated parsing and uploading river cross-sectional survey in AutoCAD format to river information system for supporting HEC-RAS operation (하천정비기본계획 CAD 형식 단면 측량자료 자동 추출 및 하천공간 데이터베이스 업로딩과 HEC-RAS 지원을 위한 RAUT 툴 개발)

  • Kim, Kyungdong;Kim, Dongsu;You, Hojun
    • Journal of Korea Water Resources Association
    • /
    • v.54 no.12
    • /
    • pp.1339-1348
    • /
    • 2021
  • In accordance with the River Law, a basic river maintenance plan is established every 5-10 years for domestic rivers with a considerable national budget, and various river surveys, such as the cross sections required for HEC-RAS simulation of flood level calculation, are conducted. However, river survey data are provided to the River Management Geographic Information System (RIMGIS) only in the form of PDF reports, while the original data, in CAD format, remain distributed among the designers who performed the river maintenance plan; their usability for other purposes is therefore considerably reduced. In addition, when surveyed CAD cross-sectional data are used for HEC-RAS, tools such as 'Dream' exist, but in practice the time and cost involved are close to those of manual work. In this study, RAUT (River Information Auto Upload Tool) was developed to solve these problems. First, RAUT automates the complicated steps of manually entering CAD survey data and preparing the input data of the one-dimensional HEC-RAS model used in establishing the basic river plan in practice. Second, it can directly read CAD survey data, which is river spatial information, and automatically upload it to a river spatial information database based on the standard data model (ArcRiver), enabling the management of the river survey data of river maintenance plans at the national level. In other words, if RIMGIS adopted a tool such as RAUT, it could systematically manage national river survey data such as river cross sections. As a pilot, the developed RAUT read the river spatial information CAD data of the river maintenance master plan for the Jeju-do agar basin, built it into a MySQL-based spatial DB, and automatically generated topographic data for a one-dimensional HEC-RAS simulation from that DB.
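
The parsing-and-grouping step RAUT automates can be sketched against a simplified text export of (station, offset, elevation) triples — a hypothetical stand-in for the CAD entities the tool actually reads:

```python
def parse_sections(lines):
    """Parse lines like 'XS-001, 12.5, 83.2' into
    {station: [(offset, elevation), ...]}, sorted by offset as a
    HEC-RAS-style cross-section geometry expects.
    """
    sections = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):   # skip blanks and comments
            continue
        station, offset, elev = (f.strip() for f in line.split(","))
        sections.setdefault(station, []).append((float(offset), float(elev)))
    for pts in sections.values():
        pts.sort()                             # order points across the section
    return sections
```

The records returned here are what a tool like RAUT would then insert into the spatial DB and feed to the 1-D model geometry.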

Developing a Korean Standard Brain Atlas on the basis of Statistical and Probabilistic Approach and Visualization tool for Functional image analysis (확률 및 통계적 개념에 근거한 한국인 표준 뇌 지도 작성 및 기능 영상 분석을 위한 가시화 방법에 관한 연구)

  • Koo, B.B.;Lee, J.M.;Kim, J.S.;Lee, J.S.;Kim, I.Y.;Kim, J.J.;Lee, D.S.;Kwon, J.S.;Kim, S.I.
    • The Korean Journal of Nuclear Medicine
    • /
    • v.37 no.3
    • /
    • pp.162-170
    • /
    • 2003
  • Probabilistic anatomical maps are used to localize functional neuro-images and to characterize morphological variability. A quantitative indicator is very important for determining the anatomical position of an activated region, because functional image data have low resolution and no inherent anatomical information. Although the previously developed MNI probabilistic anatomical map is sufficient for localizing such data, it is not suitable for Korean brains because of the morphological differences between Occidental and Oriental brains. In this study, we developed a probabilistic anatomical map for normal Korean brains. Seventy-five normal brains were imaged with T1-weighted spoiled gradient echo magnetic resonance on a 1.5-T GE SIGNA scanner. A standard brain was then selected from the group by a clinician searching for a brain of average properties in the Talairach coordinate system. On the standard brain, an anatomist delineated 89 regions of interest (ROIs) parcellating cortical and subcortical areas. The parcellated ROIs of the standard were warped and overlapped onto each brain by maximizing intensity similarity, and every brain was automatically labeled with the registered ROIs. Each same-labeled region was linearly normalized to the standard brain, and the occurrence of each region was counted. Finally, 89 probabilistic ROI volumes were generated. This paper presents a probabilistic anatomical map for localizing the functional and structural analysis of normal Korean brains. In the future, we will develop group-specific probabilistic anatomical maps for OCD and schizophrenia.
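
The final step — turning per-subject labels into probabilities — reduces to counting, per voxel, how many subjects carry a given ROI label after normalization to the standard space. A minimal sketch on flattened label volumes:

```python
def probabilistic_map(label_volumes, roi):
    """label_volumes: equally-sized 1-D label arrays, one per subject,
    already normalized to the standard space. Returns the per-voxel
    probability that a voxel belongs to ROI `roi` across subjects.
    """
    n = len(label_volumes)
    size = len(label_volumes[0])
    return [sum(vol[v] == roi for vol in label_volumes) / n
            for v in range(size)]
```

Running this once per ROI label yields the 89 probabilistic ROI volumes described above.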

The Analysis of Parcels for Land Alteration in Jinan-Gun, Jeollabuk-Do based on GIS (GIS 기반 전라북도 진안군의 토지이동 필지 분석)

  • Lee, Geun Sang;Park, Jong Ahn;Cho, Gi Sung
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.22 no.1
    • /
    • pp.3-12
    • /
    • 2014
  • A cadastre is a set of activities for registering diverse land information in national land management work. A nation examines land information, registers it in a cadastral book, and must update the data when necessary to maintain the information properly. Currently, local governments process parcels of land alteration manually based on the KLIS road map, which is very time-consuming and causes problems such as missing many parcels of land alteration. This study suggests a method of selecting the parcels of land alteration for Jinan-Gun, Jeollabuk-Do using GIS spatial overlay, with the following results. First, the manual work on parcels of land alteration was greatly improved by automatically extracting the number and area of parcels according to land classification and ownership with a GIS spatial overlay based on serial cadastral maps and KLIS road lines. Second, the existing work based on KLIS road lines could be advanced by analyzing the parcels of land alteration using the actual road widths from the new address system, so that all road areas in the study site are considered. Lastly, by analyzing the number and area of parcels according to land classification and ownership within various roadside ranges of 3 m, 5 m, and 10 m using the GIS buffering method, this study can supply efficient information for determining the parcels of land alteration consistent with the road conditions of local governments.
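
The buffering step can be mimicked with simple geometry: a parcel is selected when its representative point lies within the buffer distance of a road centerline segment. A sketch (real cadastral parcels are polygons and roads are polylines, so a GIS overlay does considerably more):

```python
import math

def dist_point_segment(p, a, b):
    """Distance from point p to the line segment a-b (all (x, y) tuples)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # project p onto the segment, clamped to its endpoints
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def parcels_in_buffer(parcels, road, width):
    """parcels: {id: (x, y)}; road: (a, b) centerline segment.

    Returns the ids of parcels whose representative point falls inside
    the road buffer of the given width.
    """
    return [pid for pid, pt in parcels.items()
            if dist_point_segment(pt, *road) <= width]
```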

Building Large-scale CityGML Feature for Digital 3D Infrastructure (디지털 3D 인프라 구축을 위한 대규모 CityGML 객체 생성 방법)

  • Jang, Hanme;Kim, HyunJun;Kang, HyeYoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.39 no.3
    • /
    • pp.187-201
    • /
    • 2021
  • Recently, the demand for a 3D urban spatial information infrastructure for storing, operating, and analyzing the large amounts of digital data produced in cities is increasing. CityGML is a 3D spatial information data standard of the OGC (Open Geospatial Consortium) with strengths in the exchange and attribute expression of city data. Cases of constructing 3D urban spatial data in CityGML format have emerged in several cities, such as Singapore and New York. However, the current ecosystem for creating and editing CityGML data lacks the completeness of commercial programs used to construct 3D data, such as SketchUp or 3ds Max, which limits the construction of CityGML data at large scale. Therefore, this study proposes a method of constructing CityGML data from commercial 3D mesh data and 2D polygons that are produced rapidly and automatically through aerial LiDAR (Light Detection and Ranging) or RGB (Red Green Blue) cameras. During the data construction process, the original 3D mesh data were geometrically transformed so that each object could be expressed at the various CityGML LoDs (Levels of Detail), and attribute information extracted from the 2D spatial information data was used as a supplement to increase its utility as spatial information. The 3D city features produced in this study are the CityGML building, bridge, cityFurniture, road, and tunnel. Methods of data conversion and attribute construction for each feature are presented, and visualization and validation were conducted.
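
Emitting CityGML features programmatically largely comes down to namespace-correct XML generation. A schematic sketch of a `bldg:Building` member with `xml.etree` — the namespaces follow CityGML 2.0, but the geometry and attributes are far below what a valid LoD model requires:

```python
import xml.etree.ElementTree as ET

# CityGML 2.0 namespaces (core, building module, GML)
NS = {
    "core": "http://www.opengis.net/citygml/2.0",
    "bldg": "http://www.opengis.net/citygml/building/2.0",
    "gml": "http://www.opengis.net/gml",
}
for prefix, uri in NS.items():
    ET.register_namespace(prefix, uri)

def building_feature(gml_id, height):
    """Build a cityObjectMember holding one Building with a measured height."""
    member = ET.Element(f"{{{NS['core']}}}cityObjectMember")
    bldg = ET.SubElement(member, f"{{{NS['bldg']}}}Building",
                         {f"{{{NS['gml']}}}id": gml_id})
    h = ET.SubElement(bldg, f"{{{NS['bldg']}}}measuredHeight", uom="m")
    h.text = str(height)
    return ET.tostring(member, encoding="unicode")
```

A full converter would add the LoD geometry derived from the transformed mesh and the attributes extracted from the 2D data, following the same element-construction pattern.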