• Title/Summary/Keyword: automatic information extraction (자동정보 추출)

Search Results: 2,000, Processing Time: 0.034 seconds

Dangerous Abandoned Object Extraction Model Using Area Variation Characteristics (면적의 변화 특성을 이용한 위험 유기물 형상 추출 모델)

  • Kim, Won
    • Journal of the Korea Convergence Society
    • /
    • v.11 no.8
    • /
    • pp.39-45
    • /
    • 2020
  • Recently, terror attacks using explosives, toxic materials, and similar means have been attempted in public places in countries such as the United States, England, and Japan. Attacks in which dangerous objects are abandoned in public places are understood to be among the most difficult to detect. Although cameras record video at many spots in public places, it is very hard for security personnel to monitor every feed, so smart software that analyzes video automatically is now used to detect abandoned objects. The method by Lin et al. shows comparatively high detection rates for abandoned objects, but it cannot easily obtain shape information, because the number of object pixels tends to decrease abruptly over time due to the characteristics of short-term background images. In this research, a novel method is proposed to successfully extract the shape of an abandoned object by analyzing the characteristics of its area variation. Experimental results show that the proposed method extracts shape information better than the preceding approach.
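The area-variation idea above can be sketched in a few lines. This is a minimal illustration under an assumed setting (a sequence of per-frame candidate object masks whose pixel count decays under short-term background modelling), not the paper's actual algorithm: keep the last frame whose mask area is still close to the peak, so the shape is captured before it erodes.

```python
import numpy as np

def select_shape_frame(masks, drop_ratio=0.5):
    """Pick the frame whose mask best preserves the object's shape.

    masks: list of 2-D binary arrays (candidate object mask per frame).
    Under short-term background modelling the pixel count tends to decay
    over time, so we keep the last frame before the area drops below
    `drop_ratio` of the peak area.
    """
    areas = np.array([int(m.sum()) for m in masks])
    peak = areas.max()
    good = np.where(areas >= drop_ratio * peak)[0]
    return int(good[-1]), areas
```

For masks with areas 10, 9, 8, 3, 1 and the default ratio, the third frame (index 2) is selected: the last one still holding at least half the peak area.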

A Semantics-based Video Retrieval System using Annotation and Feature (주석 및 특징을 이용한 의미기반 비디오 검색 시스템)

  • 이종희
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.41 no.4
    • /
    • pp.95-102
    • /
    • 2004
  • To process video data effectively, the content information of the video must be loaded into a database, and semantics-based retrieval must be available for users' diverse queries. Existing content-based video retrieval systems search by a single method, either annotation-based or feature-based retrieval; they show low search efficiency and demand considerable effort from system administrators or annotators because automatic processing is imperfect. In this paper, we propose a semantics-based video retrieval system that supports the semantic queries of diverse users over massive video data by combining feature-based and annotation-based retrieval. From the user's initial query and the selection of an image among the extracted key frames, an agent attaches detailed shape annotations to the selected key frame. The key frame selected by the user also becomes a query image, and the system finds the most similar key frames through the proposed feature-based retrieval method with optimized comparison-area extraction. The system thereby raises the retrieval efficiency of video data through semantics-based retrieval.
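The two retrieval paths described above can be mixed with a simple weighted score. This is a generic sketch, not the paper's actual measures: cosine similarity stands in for the feature-based path and Jaccard overlap of annotation terms for the annotation-based path.

```python
import math

def combined_score(fv_q, fv_k, terms_q, terms_k, w=0.5):
    """Weighted mix of feature similarity (cosine of two feature
    vectors) and annotation similarity (Jaccard overlap of term sets).
    w balances the two retrieval paths."""
    dot = sum(a * b for a, b in zip(fv_q, fv_k))
    norm = (math.sqrt(sum(a * a for a in fv_q))
            * math.sqrt(sum(b * b for b in fv_k)))
    cosine = dot / norm if norm else 0.0
    union = terms_q | terms_k
    jaccard = len(terms_q & terms_k) / len(union) if union else 0.0
    return w * cosine + (1 - w) * jaccard
```

Ranking key frames by this score lets a match on either path (visual features or annotations) lift a candidate, which is the point of running both retrieval methods together.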

Traffic Anomaly Identification Using Multi-Class Support Vector Machine (다중 클래스 SVM을 이용한 트래픽의 이상패턴 검출)

  • Park, Young-Jae;Kim, Gye-Young;Jang, Seok-Woo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.14 no.4
    • /
    • pp.1942-1950
    • /
    • 2013
  • This paper suggests a new method of detecting network traffic attacks by visualizing original traffic data and applying a multi-class SVM (support vector machine). The proposed method first generates 2D images from the IP addresses and ports of transmitters and receivers, and extracts from the images the linear patterns and high-intensity values that represent traffic attacks. It then obtains the variance of transmitter and receiver ports and extracts cluster-count and entropy features using the ISODATA algorithm. Finally, it determines through the multi-class SVM whether the traffic data contain DDoS, DoS, Internet worm, or port scan attacks. Experimental results show that the suggested multi-class SVM-based algorithm detects network traffic attacks more effectively.
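The port-variance and entropy features mentioned above can be sketched with standard-library code. This is a simplified stand-in that computes them directly from a list of observed destination ports rather than from the paper's 2-D traffic images and ISODATA clusters; the final multi-class step would then feed such features to an SVM implementation (e.g. scikit-learn's SVC).

```python
import math
from collections import Counter

def traffic_features(ports):
    """Two of the abstract's feature types for a traffic window:
    the variance of the observed ports (spread of targeted ports)
    and the Shannon entropy of the port distribution (port scans
    touch many ports, so both values rise sharply)."""
    n = len(ports)
    mean = sum(ports) / n
    variance = sum((p - mean) ** 2 for p in ports) / n
    counts = Counter(ports)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return variance, entropy
```

Normal traffic concentrated on one service port yields (0.0, 0.0), while a window that sweeps many distinct ports produces large variance and entropy, which is what makes these features separable for the classifier.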

Quantitative Estimation of Shoreline Changes Using Multi-sensor Datasets: A Case Study for Bangamoeri Beaches (다중센서를 이용한 해안선의 정량적 변화 추정: 방아머리 해빈을 중심으로)

  • Yun, Kong-Hyun;Song, Yeong Sun
    • Korean Journal of Remote Sensing
    • /
    • v.35 no.5_1
    • /
    • pp.693-703
    • /
    • 2019
  • Long-term coastal topographical data are critical for analyzing temporal and spatial changes in shorelines, and understanding change trends is essential for future coastal management. For this research we obtained digital aerial images, terrestrial laser scanning data, and UAV images in 2009, 2018, and 2019, respectively, together with tidal observation data from the Korea Hydrographic and Oceanographic Agency, for Bangamoeri beach located in Ansan, Gyeonggi-do. We applied photogrammetric techniques to extract the 4.40 m coastline from the 2009 stereo images by stereoscopic viewing. For 2018, a digital elevation model was generated from the raw laser scanner data and the corresponding shoreline was extracted semi-automatically. For 2019, a digital elevation model was generated from the drone images to extract the coastline. Finally, the rate of shoreline change was calculated using the Digital Shoreline Analysis System, and a qualitative analysis was presented.
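The Digital Shoreline Analysis System's most basic change statistic, the end point rate (EPR), can be written down directly. This sketch assumes shoreline positions have already been measured as signed distances along a transect from a fixed baseline.

```python
def end_point_rate(d_old, d_new, year_old, year_new):
    """End point rate: net shoreline movement along a transect (m)
    divided by the years elapsed between the two surveys (m/yr).
    Negative values indicate retreat toward the baseline."""
    return (d_new - d_old) / (year_new - year_old)
```

For example, a shoreline point that moved from 10.0 m to 4.0 m along its transect between 2009 and 2019 retreats at 0.6 m/yr; DSAS computes this rate per transect and maps the spatial pattern.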

A Basic Study of Obstacles Extraction on the Road for the Stability of Self-driving Vehicles (자율주행 차량의 안전성을 위한 도로의 장애물 추출에 대한 기초 연구)

  • Park, Chang min
    • Journal of Platform Technology
    • /
    • v.9 no.2
    • /
    • pp.46-54
    • /
    • 2021
  • Recently, interest in the safety of self-driving vehicles has been increasing. Self-driving vehicles have been studied and developed by many universities, research centers, car companies, and companies in other industries around the world since the mid-1980s. In this study, we propose a method for automatically extracting threatening obstacles on the road for self-driving vehicles. A threatening obstacle is defined here as a comparatively large object at the center of the image. First, an input image and its reduced-resolution versions are segmented. Segmented areas are classified as outer or inner: an outer area is adjacent to the image boundaries, while an inner area is not. Each area is merged with its neighbors when the adjacent areas belong to the same area in the reduced-resolution image. Obstacle areas and non-obstacle areas are then selected from the inner and outer areas, respectively; obstacle areas represent the obstacle and are selected using information about area size and location. Together, the obstacle and non-obstacle areas delineate the threatening obstacle on the road. Through experiments, we expect that the proposed method will help reduce accidents and casualties in self-driving.
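The first classification step above (outer versus inner regions) is easy to make concrete. This is a minimal sketch assuming a labeled segmentation image is already available; the segmentation and multi-resolution merging themselves are out of scope here.

```python
import numpy as np

def split_outer_inner(label_img):
    """Regions whose labels appear on the image boundary are 'outer';
    all remaining labels are 'inner' (the obstacle candidates, since a
    threatening obstacle sits near the center of the image)."""
    border = (set(label_img[0, :]) | set(label_img[-1, :])
              | set(label_img[:, 0]) | set(label_img[:, -1]))
    return border, set(label_img.ravel()) - border
```

On a toy 3x3 label image where region 1 occupies only the center pixel, region 0 is classified as outer and region 1 as inner, exactly the split the abstract describes.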

Generation of 3-D City Model using Aerial Imagery (항공사진을 이용한 3차원 도시 모형 생성)

  • Yeu Bock Mo;Jin Kyeong Hyeok;Yoo Hwan Hee
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.23 no.3
    • /
    • pp.233-238
    • /
    • 2005
  • 3-D virtual city models are becoming increasingly important for a number of GIS applications. For reconstructing 3D buildings in urban areas, aerial images, satellite images, and LiDAR data have mainly been used, and most research on 3-D reconstruction focuses on methods for extracting building heights and reconstructing buildings. When building heights are extracted and reconstructed automatically using only aerial or satellite images, many problems arise, such as mismatching caused by the geometric distortion of optical images. Therefore, research on integrating optical images with existing digital maps (1/1,000) has been in progress. In this paper, we focus on extracting building heights by means of interest points and the vertical line locus method in order to reduce the number of matching points. We also used a digital plotter to validate the results, using aerial images (1/5,000) and an existing digital map (1/1,000).
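Building height extraction from vertical aerial photos is classically tied to relief displacement. As a hedged illustration of the geometry involved (the standard single-photo relation, not the paper's interest-point and vertical-line-locus procedure), the relation h = d·H / r can be coded directly:

```python
def building_height(d, r, flying_height):
    """Classic relief-displacement relation for a vertical photo:
    h = d * H / r, where d is the displacement between the imaged roof
    and base of the building, r the radial distance of the roof point
    from the nadir, and H the flying height above ground (same units
    for d and r; h comes out in the units of H)."""
    return d * flying_height / r
```

A roof displaced 2.0 mm at a radial distance of 100.0 mm on a photo flown at 1500 m gives a 30 m building, which shows why tall buildings far from the nadir are the hardest cases for automatic matching.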

Dynamic ontology construction algorithm from Wikipedia and its application toward real-time nation image analysis (국가이미지 분석을 위한 위키피디아 실시간 동적 온톨로지 구축 알고리즘 및 적용)

  • Lee, Youngwhan
    • Journal of the Korean Data and Information Science Society
    • /
    • v.27 no.4
    • /
    • pp.979-991
    • /
    • 2016
  • Measuring nation images was a challenging task when offline surveys were the only option: they were prohibitively expensive and too time-consuming, and therefore unsuited to this rapidly changing world. Although demand for monitoring real-time nation images has been ever-increasing, no affordable and reliable solution for measuring nation images has been available to date. The researcher in this study developed a semi-automatic ontology construction algorithm, named "double-crossing double keyword collection" (DCDKC), to measure nation images from Wikipedia in real time. The resulting ontology, WikiOnto, can be used to reflect dynamic image changes. In this study, an instance of WikiOnto was constructed by applying the algorithm to the big-three exporting countries in East Asia: Korea, Japan, and China. The page-view counts for the words in the WikiOnto instance were then collected, and the counts for each country were compared to assess their usefulness for dynamic nation images. In conclusion, the results show how the images of the three countries changed over the study period, confirming that DCDKC can serve well in a real-time nation-image monitoring system.
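The comparison step above (page-view counts across countries) only makes sense on a common scale. A minimal sketch of that normalization, assuming per-term counts have already been collected for a country's WikiOnto instance (the DCDKC collection algorithm itself is not reproduced here):

```python
def pageview_profile(counts):
    """Normalize per-term page-view counts into shares so that country
    profiles with different overall traffic volumes can be compared
    term by term."""
    total = sum(counts.values())
    return {term: n / total for term, n in counts.items()}
```

Two countries with very different absolute traffic can then be compared by the share each image-related term receives, which is what makes the profiles usable as a dynamic image signal.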

LiDAR Chip for Automated Geo-referencing of High-Resolution Satellite Imagery (라이다 칩을 이용한 고해상도 위성영상의 자동좌표등록)

  • Lee, Chang No;Oh, Jae Hong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.32 no.4_1
    • /
    • pp.319-326
    • /
    • 2014
  • An accurate geo-referencing process that applies ground control points is a prerequisite for the effective end use of HRSI (high-resolution satellite imagery). Since conventional control point acquisition by a human operator takes a long time, automated matching to existing reference data has been increasing in popularity. Among the many options for reference data, airborne LiDAR (Light Detection And Ranging) data show high potential due to their high spatial resolution and vertical accuracy; in addition, they take the form of a 3-dimensional point cloud free from relief displacement. Recently, a new matching method between LiDAR data and HRSI was proposed that is based on projecting the whole LiDAR dataset into the HRSI image domain; however, importing and processing such a large amount of LiDAR data is time-consuming. We were therefore motivated to propose local LiDAR chip generation for HRSI geo-referencing. In the procedure, a LiDAR point cloud is rasterized into an ortho image using a digital elevation model. We then select local areas that contain a meaningful amount of edge information to create LiDAR chips of small data size. We tested the LiDAR chips for fully automated geo-referencing with Kompsat-2 and Kompsat-3 data, and the experimental results showed a mean accuracy at the one-pixel level.
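The rasterization step above (point cloud to grid) can be sketched as a simple binning pass. This is a simplified stand-in for the paper's ortho-image generation: it grids raw (x, y, z) points into a max-elevation surface and ignores intensity, interpolation, and void filling.

```python
import numpy as np

def rasterize_dem(points, cell=1.0):
    """Bin a LiDAR point cloud of (x, y, z) tuples into a grid, keeping
    the highest z per cell (a first-return-style surface). Cells with
    no points remain NaN."""
    pts = np.asarray(points, dtype=float)
    xi = ((pts[:, 0] - pts[:, 0].min()) // cell).astype(int)
    yi = ((pts[:, 1] - pts[:, 1].min()) // cell).astype(int)
    dem = np.full((yi.max() + 1, xi.max() + 1), np.nan)
    for x, y, z in zip(xi, yi, pts[:, 2]):
        if np.isnan(dem[y, x]) or z > dem[y, x]:
            dem[y, x] = z
    return dem
```

Edge detection on such a grid is then what identifies the information-rich local areas worth cutting out as small LiDAR chips.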

Determination of Spatial Resolution to Improve GCP Chip Matching Performance for CAS-4 (농림위성용 GCP 칩 매칭 성능 향상을 위한 위성영상 공간해상도 결정)

  • Lee, YooJin;Kim, Taejung
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.6_1
    • /
    • pp.1517-1526
    • /
    • 2021
  • With the recent global and domestic development of Earth observation satellites, the applications of satellite images have widened, and research on improving the geometric accuracy of satellite images is being actively carried out. This paper studies the possibility of automated ground control point (GCP) generation for the CAS-4 satellite, to be launched in 2025 with the capability of image acquisition at 5 m ground sampling distance (GSD). In particular, it examines whether the GCP chips with 25 cm GSD established for CAS-1 satellite images can be used for CAS-4, and whether an optimal spatial resolution for matching CAS-4 images against GCP chips can be determined to improve matching performance. Experiments were carried out using RapidEye images, which have a GSD similar to CAS-4. The original satellite images were upsampled to produce satellite images with smaller GSDs. At each GSD level, the upsampled satellite images were matched against the GCP chips and precision sensor models were estimated. The results show that sensor model accuracy improved with images at smaller GSDs compared to the accuracy established with the original images; at 1.25-1.67 m GSD, an accuracy of about 2.4 m was achieved. This finding suggests the possibility of automated GCP extraction and precision ortho-image generation for CAS-4 with improved accuracy.
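The chip-to-image matching at the core of the experiment above is typically done with normalized cross-correlation. A compact exhaustive-search sketch (real pipelines would use a pyramid or an optimized library routine such as OpenCV's matchTemplate, and both images would first be resampled to the chosen GSD):

```python
import numpy as np

def ncc_match(image, chip):
    """Slide a GCP chip over a search image and return the (row, col)
    of the best normalized cross-correlation score along with that
    score (1.0 = perfect match)."""
    ih, iw = image.shape
    ch, cw = chip.shape
    c = (chip - chip.mean()) / chip.std()
    best, best_rc = -2.0, (0, 0)
    for r in range(ih - ch + 1):
        for col in range(iw - cw + 1):
            w = image[r:r + ch, col:col + cw]
            std = w.std()
            if std == 0:
                continue  # flat window: correlation undefined
            score = float((((w - w.mean()) / std) * c).mean())
            if score > best:
                best, best_rc = score, (r, col)
    return best_rc, best
```

Matching at several resampled GSD levels and comparing the resulting sensor-model residuals is then how an optimal matching resolution can be picked.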

Ontology-based Automated Metadata Generation Considering Semantic Ambiguity (의미 중의성을 고려한 온톨로지 기반 메타데이타의 자동 생성)

  • Choi, Jung-Hwa;Park, Young-Tack
    • Journal of KIISE:Software and Applications
    • /
    • v.33 no.11
    • /
    • pp.986-998
    • /
    • 2006
  • Semantic Web-based metadata that helps computers efficiently understand and manage information has become increasingly necessary with the growth of the Internet. However, semantically ambiguous information is inevitably encountered when metadata is generated, and a solution to this problem is needed. This paper proposes a new method for automated metadata generation that uses a probability model over consecutive words to decide the concept class to which an ambiguous word embedded in a document is semantically most related. We consider ambiguities among the concepts defined in an ontology and use a Hidden Markov Model to recognize the parts of a named entity. First, we construct a Markov model for the named entities of each class defined in the ontology. Next, we build the appropriate context from a text to understand the meaning of a semantically ambiguous word, and resolve the ambiguity during metadata generation by searching for the optimal Markov model corresponding to the sequence of words in the context. We experimented with seven semantically ambiguous words extracted from computer science theses. The results demonstrate successful performance, with accuracy improved by about 18% compared with SemTag, a well-known application for assigning a specific meaning to an ambiguous word based on its context.
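The HMM search described above ultimately comes down to finding the most likely state path for a word sequence, which is the standard Viterbi algorithm. The sketch below uses a made-up two-class toy model (a "language" vs. "animal" reading of an ambiguous word), not the paper's trained ontology models.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state path for an observation sequence:
    dynamic programming over start, transition, and emission
    probabilities, then backtracking through the stored pointers."""
    V = [{s: start_p[s] * emit_p[s].get(obs[0], 0.0) for s in states}]
    back = []
    for o in obs[1:]:
        row, ptr = {}, {}
        for s in states:
            prob, prev = max(
                (V[-1][p] * trans_p[p][s] * emit_p[s].get(o, 0.0), p)
                for p in states)
            row[s], ptr[s] = prob, prev
        V.append(row)
        back.append(ptr)
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))
```

With toy parameters where only the "language" class can emit the context word "code", the ambiguous word "python" followed by "code" is resolved to the language reading: the surrounding context, scored through the model, selects the class.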