• Title/Summary/Keyword: Automatic Information Extraction


Estimation of Monthly Precipitation in North Korea Using PRISM and Digital Elevation Model (PRISM과 상세 지형정보에 근거한 북한지역 강수량 분포 추정)

  • Kim, Dae-Jun;Yun, Jin-I.
    • Korean Journal of Agricultural and Forest Meteorology / v.13 no.1 / pp.35-40 / 2011
  • While high-definition precipitation maps with a 270 m spatial resolution are available for South Korea, little geospatial information on precipitation is available for famine-plagued North Korea. Restricted data access and sparse observations prevent application of the widely used PRISM (Parameter-elevation Regressions on Independent Slopes Model) to North Korea for fine-resolution precipitation mapping. A hybrid method that complements the PRISM grid with a sub-grid-scale elevation function is suggested to estimate precipitation for remote, data-sparse areas such as North Korea. Fine-scale elevation-precipitation regressions for four sloping aspects were derived from 546 observation points in South Korea. A 'virtual' elevation surface at a 270 m grid spacing was generated by inverse distance weighted averaging of the station elevations of 78 KMA (Korea Meteorological Administration) synoptic stations. A 'real' elevation surface built from both the 78 synoptic stations and 468 automated weather stations (AWS) was also generated and subtracted from the virtual surface to obtain the elevation difference at each point. The same procedure was applied to monthly precipitation to obtain the precipitation difference at each point. Regression analysis was then used to derive the aspect-specific coefficient of precipitation change per unit increase in elevation. The elevation difference between the 'virtual' and 'real' surfaces was calculated at each 270 m grid point across North Korea, and the regression coefficients were applied to obtain precipitation corrections for the PRISM grid. These correction terms were added to the PRISM-generated low-resolution (~2.4 km) precipitation map to produce a 270 m high-resolution map compatible with those available for South Korea. According to the final product, the spatially averaged precipitation over the entire territory of North Korea is 1,196 mm for a climatological normal year (1971-2000), with a standard deviation of 298 mm.
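The correction step described above is essentially a per-pixel adjustment: an aspect-specific regression coefficient is multiplied by the difference between the 'real' (DEM-based) and 'virtual' (station-interpolated) elevation surfaces and added to the coarse PRISM estimate. The sketch below illustrates that idea in Python; the function and variable names (idw_surface, coeff_by_aspect, etc.) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def idw_surface(station_xy, station_values, grid_xy, power=2.0):
    """Inverse distance weighted interpolation of station values onto grid points."""
    d = np.linalg.norm(grid_xy[:, None, :] - station_xy[None, :, :], axis=2)
    d = np.maximum(d, 1e-6)                       # avoid division by zero at station locations
    w = 1.0 / d ** power
    return (w * station_values).sum(axis=1) / w.sum(axis=1)

def correct_precipitation(prism_precip, dem_elev, virtual_elev, aspect, coeff_by_aspect):
    """Add a sub-grid elevation correction to the coarse PRISM precipitation.

    coeff_by_aspect maps a sloping aspect (e.g. 'N', 'E', 'S', 'W') to a regression
    coefficient in mm of precipitation per metre of elevation difference (illustrative).
    """
    delta_z = dem_elev - virtual_elev             # 'real' minus 'virtual' elevation
    coeff = np.vectorize(coeff_by_aspect.get)(aspect).astype(float)
    return prism_precip + coeff * delta_z
```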

Development of an Image Processing System for the Large Size High Resolution Satellite Images (대용량 고해상 위성영상처리 시스템 개발)

  • 김경옥;양영규;안충현
    • Korean Journal of Remote Sensing / v.14 no.4 / pp.376-391 / 1998
  • Satellite images with a ground resolution of 1 to 3 m will be very useful for analyzing the current status of the earth's surface. An image processing system named GeoWatch, equipped with more intelligent image processing algorithms, has been designed and implemented to support detailed analysis of the land surface using high-resolution satellite imagery. GeoWatch is a valuable tool for satellite image processing tasks such as digitizing, geometric correction using ground control points, interactive enhancement, various transforms, arithmetic operations, and vegetation index calculation. It can be used for investigations such as change detection, land cover classification, capacity estimation of industrial complexes, and urban information extraction, using more intelligent analysis methods together with a variety of visual techniques. The strong points of this system are a flexible method for saving algorithms, efficient handling of large images (e.g. full scenes), automatic menu generation, and a powerful visual programming environment. Most existing image processing systems use general graphical user interfaces; in this paper, we adopted a visual programming language for remotely sensed image processing because of its programmability and ease of use. The system is an integrated raster/vector analysis system equipped with many useful functions such as vector overlay, flight simulation, 3D display, and object modeling techniques. In addition to the modules for image and digital signal processing, the system provides other utilities such as a toolbox and an interactive image editor. This paper also presents several cases of image analysis with AI (Artificial Intelligence) techniques and the design concept of the visual programming environment.
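Two of the routine raster operations mentioned above, band arithmetic for vegetation indices and interactive contrast enhancement, reduce to a few lines of array math. The following sketch shows a generic NDVI and percentile-stretch implementation; it is an illustration of these standard operations, not code from GeoWatch.

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index from near-infrared and red bands."""
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / (nir + red + eps)

def percentile_stretch(band, low=2, high=98):
    """Simple percentile contrast stretch for interactive display enhancement."""
    lo, hi = np.percentile(band, [low, high])
    return np.clip((band - lo) / (hi - lo + 1e-9), 0.0, 1.0)
```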

Feasibility Study on FSIM Index to Evaluate SAR Image Co-registration Accuracy (SAR 영상 정합 정확도 평가를 위한 FSIM 인자 활용 가능성)

  • Kim, Sang-Wan;Lee, Dongjun
    • Korean Journal of Remote Sensing / v.37 no.5_1 / pp.847-859 / 2021
  • Recently, as the number of high-resolution satellite SAR images increases, the demand for precise matching of SAR images in change detection and image fusion is consistently increasing. RMSE (Root Mean Square Error) values computed from GCPs (Ground Control Points) selected by analysts have been widely used for quantitative evaluation of image registration results, while it is difficult to find an approach that measures registration accuracy automatically. In this study, a feasibility analysis was conducted on using the FSIM (Feature Similarity) index as a measure of registration accuracy. TerraSAR-X (TSX) staring spotlight data collected from various incidence angles and orbit directions were used for the analysis. FSIM was almost independent of the spatial resolution of the SAR image. Using a single SAR image, FSIM was first analyzed as a function of simulated registration error and then compared with values estimated from TSX data with different imaging geometries. The FSIM index decreased slightly with differences in imaging geometry, such as different look angles and orbit tracks. Analyzing the FSIM value by land cover type showed that the change in the FSIM index with co-registration error was most evident in urban areas. Therefore, the FSIM index calculated over urban areas was most suitable for determining the accuracy of image registration. The FSIM index thus has sufficient potential to be used as an index of the co-registration accuracy of SAR images.
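The single-image experiment described above, shifting an image against itself by a known offset and watching the similarity index fall, is easy to reproduce. The sketch below uses only the gradient-magnitude term of FSIM as a simplified stand-in (the full index also weights by phase congruency), applied to a synthetic scene; it demonstrates the sensitivity analysis, not the paper's exact procedure.

```python
import numpy as np
from scipy import ndimage

def gradient_similarity(img1, img2, c=1e-3):
    """Mean gradient-magnitude similarity; only the gradient term of FSIM."""
    g1 = np.hypot(ndimage.sobel(img1, 0), ndimage.sobel(img1, 1))
    g2 = np.hypot(ndimage.sobel(img2, 0), ndimage.sobel(img2, 1))
    return ((2 * g1 * g2 + c) / (g1 ** 2 + g2 ** 2 + c)).mean()

rng = np.random.default_rng(0)
reference = ndimage.gaussian_filter(rng.random((256, 256)), 2)   # synthetic scene
for offset in (0.0, 0.5, 1.0, 2.0, 4.0):                         # registration error in pixels
    shifted = ndimage.shift(reference, (0.0, offset), order=3)
    print(f"offset {offset:3.1f} px -> similarity {gradient_similarity(reference, shifted):.4f}")
```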

Deep-learning-based GPR Data Interpretation Technique for Detecting Cavities in Urban Roads (도심지 도로 지하공동 탐지를 위한 딥러닝 기반 GPR 자료 해석 기법)

  • Byunghoon, Choi;Sukjoon, Pyun;Woochang, Choi;Churl-hyun, Jo;Jinsung, Yoon
    • Geophysics and Geophysical Exploration / v.25 no.4 / pp.189-200 / 2022
  • Ground subsidence on urban roads is a social issue that can lead to human and property damage. Therefore, it is crucial to detect underground cavities in advance and repair them. Underground cavity detection is mainly performed using ground penetrating radar (GPR) surveys. This process is time-consuming, as a massive amount of GPR data needs to be interpreted, and the results vary depending on the skills and subjectivity of experts. To address these problems, researchers have studied automation and quantification techniques for GPR data interpretation, and recent studies have focused on deep-learning-based interpretation techniques. In this study, we describe a hyperbolic event detection process based on deep learning for GPR data interpretation. To demonstrate this process, we implemented a series of algorithms introduced in previous research step by step. First, a deep-learning-based YOLOv3 object detection model was applied to automatically detect hyperbolic signals. Subsequently, only the hyperbolic signals were extracted using the column-connection clustering (C3) algorithm. Finally, the horizontal locations of the underground cavities were determined using regression analysis. Hyperbolic event detection using the YOLOv3 object detection technique achieved 84% precision and 92% recall based on AP50. The predicted horizontal locations of the four underground cavities were approximately 0.12-0.36 m away from their actual locations. Thus, we confirmed that the existing deep-learning-based interpretation technique is reliable for detecting the hyperbolic patterns that indicate underground cavities.
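The final regression step can be pictured as fitting the two-way travel-time hyperbola of a point diffractor to the picks that survive the C3 clustering; the apex of the fitted curve gives the horizontal location of the cavity. The sketch below fits such a hyperbola to synthetic picks with SciPy; the parameterization and values are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def hyperbola(x, x0, t0, v):
    """Two-way travel time of a point diffractor with apex at x0 (apex time t0, velocity v)."""
    return np.sqrt(t0 ** 2 + (2.0 * (x - x0) / v) ** 2)

# Synthetic picks along one hyperbolic event, standing in for the clustered detections.
x = np.linspace(0.0, 2.0, 41)                       # antenna positions [m]
true_x0, true_t0, true_v = 1.05, 8.0, 0.10          # apex position [m], apex time [ns], velocity [m/ns]
t = hyperbola(x, true_x0, true_t0, true_v) + np.random.default_rng(1).normal(0.0, 0.05, x.size)

(est_x0, est_t0, est_v), _ = curve_fit(hyperbola, x, t, p0=[1.0, 7.0, 0.12])
print(f"estimated cavity position: {est_x0:.3f} m (true {true_x0} m)")
```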

Development of Deep Learning Model for Detecting Road Cracks Based on Drone Image Data (드론 촬영 이미지 데이터를 기반으로 한 도로 균열 탐지 딥러닝 모델 개발)

  • Young-Ju Kwon;Sung-ho Mun
    • Land and Housing Review / v.14 no.2 / pp.125-135 / 2023
  • Drones are used in various fields, including land survey, transportation, forestry/agriculture, marine, environment, disaster prevention, water resources, cultural assets, and construction, as their industrial importance and market size have increased. In this study, image data for deep learning were collected using a Mavic 3 drone capturing images at a shooting altitude of 20 m with ×7 magnification. Swin Transformer and UperNet were employed as the backbone and architecture of the deep learning model. About 800 labeled images were augmented to increase the amount of data. Training was carried out in three rounds: the cross-entropy loss function was used in the first and second rounds, and the Tversky loss function in the third. In the future, if the crack detection model is advanced through convergence with the Internet of Things (IoT) in additional research, it will also become possible to detect patching or potholes. In addition, real-time detection by drones is expected to quickly identify pavement sections requiring maintenance.
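The switch from cross-entropy to the Tversky loss in the final training round is the main recipe detail here: the Tversky index generalizes Dice by weighting false positives and false negatives separately, which helps with thin, sparse structures such as cracks. A minimal PyTorch sketch follows; the alpha/beta values are illustrative assumptions, not the paper's settings.

```python
import torch

def tversky_loss(logits, target, alpha=0.3, beta=0.7, eps=1e-6):
    """Tversky loss for binary crack segmentation.

    alpha weights false positives and beta false negatives; alpha = beta = 0.5
    reduces to the Dice loss. beta > alpha penalizes missed crack pixels more.
    """
    prob = torch.sigmoid(logits).flatten()
    target = target.float().flatten()
    tp = (prob * target).sum()
    fp = (prob * (1.0 - target)).sum()
    fn = ((1.0 - prob) * target).sum()
    return 1.0 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)
```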

Analysis of the Abstract Structure in Scientific Papers by Gifted Students and Exploring the Possibilities of Artificial Intelligence Applied to the Educational Setting (과학 영재의 논문 초록 구조 분석 및 이에 대한 인공지능의 활용 가능성 탐색)

  • Bongwoo Lee;Hunkoog Jho
    • Journal of The Korean Association For Science Education / v.43 no.6 / pp.573-582 / 2023
  • This study aimed to explore the potential use of artificial intelligence in science education for gifted students by analyzing the structure of abstracts written by students at a gifted science academy and comparing the performance of AI in extracting the various elements. The study analyzed 263 graduation theses from S Science High School over five years (2017-2021), focusing on the frequency and types of background, objectives, methods, results, and discussion included in their abstracts. This was followed by an evaluation of classification accuracy using AI methods based on fine-tuning and prompts. The results revealed that the frequency of elements in the abstracts written by gifted students followed the order of objectives, methods, results, background, and discussion. However, only 57.4% of the abstracts contained all the essential elements, namely objectives, methods, and results. Fine-tuned AI classification showed the highest accuracy, with background, objectives, and results demonstrating relatively high performance, while methods and discussion were often classified inaccurately. These findings suggest the need for more effective use of AI, for example by providing a better distribution of elements or appropriate datasets for training. Educational implications of these findings are also discussed.
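As a concrete illustration of the classification task, each abstract sentence is assigned to one of the rhetorical elements (background, objective, method, result, discussion). The sketch below does this with an off-the-shelf zero-shot classifier from the Hugging Face transformers library; it is a generic stand-in, not the fine-tuned or prompt-based models evaluated in the study, and the example sentences are invented.

```python
from transformers import pipeline

labels = ["background", "objective", "method", "result", "discussion"]
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

sentences = [
    "Microplastic pollution in rivers has become a growing concern.",   # invented examples
    "We measured particle concentrations at five sampling sites.",
    "Concentrations were highest downstream of the treatment plant.",
]
for sentence in sentences:
    result = classifier(sentence, candidate_labels=labels)
    print(f"{result['labels'][0]:>10s}  <-  {sentence}")
```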

Standardization and Management of Interface Terminology regarding Chief Complaints, Diagnoses and Procedures for Electronic Medical Records: Experiences of a Four-hospital Consortium (전자의무기록 표준화 용어 관리 프로세스 정립)

  • Kang, Jae-Eun;Kim, Kidong;Lee, Young-Ae;Yoo, Sooyoung;Lee, Ho Young;Hong, Kyung Lan;Hwang, Woo Yeon
    • Journal of the Korea Academia-Industrial cooperation Society / v.22 no.3 / pp.679-687 / 2021
  • The purpose of the present study was to document the standardization and management process for interface terminology regarding chief complaints, diagnoses, and procedures, including surgery, in a four-hospital consortium. The process was proposed, discussed, modified, and finalized in 2016 by the Terminology Standardization Committee (TSC), consisting of personnel from the four hospitals. A request regarding interface terminology was classified into one of four categories: 1) registration of a new term, 2) revision, 3) deletion of an old term and registration of a new term, and 4) deletion. A request was processed in the following order: 1) collecting testimony from related departments and 2) voting by the TSC. At least five of the seven possible members of the voting pool had to approve a request. Mapping to the reference terminology was performed by three independent medical information managers. All steps were performed online, and the voting and mapping results were collected automatically. This made the decision-making process clear and fast and made users receptive to the decisions of the TSC. In the 16 months after the process was adopted, there were 126 registrations of new terms, 131 revisions, 40 deletions of an old term with registration of a new term, and 1,235 deletions.

A Dynamic Management Method for FOAF Using RSS and OLAP cube (RSS와 OLAP 큐브를 이용한 FOAF의 동적 관리 기법)

  • Sohn, Jong-Soo;Chung, In-Jeong
    • Journal of Intelligence and Information Systems / v.17 no.2 / pp.39-60 / 2011
  • Since the introduction of Web 2.0 technology, social network services have been recognized as a foundation of important future information technology. The advent of Web 2.0 has changed who creates content: in the earlier web, content was created by service providers, whereas in the recent web it is created by service users. Users share experiences with other users and improve content quality, which has increased the importance of social networks. As a result, diverse forms of social network service have emerged from the relations and experiences of users. A social network is a network that constructs and expresses social relations among people who share interests and activities. Today's social network services are not confined to showing user interactions; they have developed to a level at which content generation and evaluation interact with each other. As the volume of content generated from social network services and the number of connections between users have drastically increased, social network extraction has become more complicated, and the following problems arise. The first problem is the insufficient representational power of objects in the social network. The second is the inability to express the diverse connections among users. The third is the difficulty of reflecting dynamic changes in the social network caused by changes in user interests. The last is the lack of a method for integrating and processing data efficiently in a heterogeneous distributed computing environment. The first and last problems can be solved by using FOAF, a vocabulary for describing ontology-based user profiles for the construction of social networks. However, solving the second and third problems requires a novel technique for reflecting dynamic changes in user interests and relations. In this paper, we propose a method to overcome the above problems of existing social network extraction by applying FOAF (a vocabulary for describing user profiles) and RSS (a web content syndication format) to an OLAP system in order to dynamically update and manage FOAF. We exploit data interoperability, an important characteristic of FOAF, and use RSS to reflect changes over time in user interests; RSS provides a standard vocabulary for syndicating web site content in RDF/XML form. We collect users' personal information and relations with FOAF and user content with RSS, and the collected data are inserted into a database organized as a star schema. The proposed system generates an OLAP cube from the data in the database, which is processed by the Dynamic FOAF Management Algorithm. This algorithm consists of two functions: find_id_interest() and find_relation(). find_id_interest() extracts user interests during the input period, and find_relation() extracts users matching those interests. Finally, the proposed system reconstructs FOAF by reflecting the extracted relationships and interests of users. To justify the suggested idea, we present the implemented results together with their analysis. We used the C# language and an MS-SQL database, with FOAF and RSS data collected from livejournal.com as input. The results show that users' foaf:interest increased by an average of 19 percent over four weeks. In proportion to this change in foaf:interest, the number of users' foaf:knows relations grew by an average of 9 percent over four weeks. Because FOAF and RSS are widely supported in Web 2.0 and social network services, the method has a definite advantage in utilizing user data distributed across diverse web sites and services regardless of language or platform. Using the method suggested in this paper, better services can be provided that cope with rapid changes in user interests through the automatic updating of FOAF.
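A rough picture of the two functions named above: find_id_interest() slices the OLAP cube by user and time period and aggregates interest terms, and find_relation() returns other users who share those interests. The pandas sketch below mimics that behavior on a toy cube; the column names and data are assumptions for illustration, not the paper's star schema.

```python
import pandas as pd

# Toy stand-in for the OLAP cube: one row per (user, interest term, week),
# with counts aggregated from RSS-collected posts.
cube = pd.DataFrame({
    "user":     ["alice", "alice", "bob", "bob", "carol"],
    "interest": ["jazz",  "hiking", "jazz", "film", "hiking"],
    "week":     [1, 2, 2, 2, 2],
    "count":    [5, 3, 4, 2, 6],
})

def find_id_interest(cube, user, weeks):
    """Interests of `user` within the given time slice, strongest first."""
    rows = cube[(cube.user == user) & (cube.week.isin(weeks))]
    return rows.groupby("interest")["count"].sum().sort_values(ascending=False)

def find_relation(cube, user, weeks):
    """Other users sharing at least one of `user`'s interests in the slice."""
    interests = set(find_id_interest(cube, user, weeks).index)
    others = cube[(cube.user != user) & (cube.interest.isin(interests))]
    return sorted(others.user.unique())

print(find_id_interest(cube, "alice", weeks=[1, 2]))
print(find_relation(cube, "alice", weeks=[1, 2]))   # candidate foaf:knows additions
```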

Analysis of Optimal Resolution and Number of GCP Chips for Precision Sensor Modeling Efficiency in Satellite Images (농림위성영상 정밀센서모델링 효율성 재고를 위한 최적의 해상도 및 지상기준점 칩 개수 분석)

  • Choi, Hyeon-Gyeong;Kim, Taejung
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1445-1462 / 2022
  • Compact Advanced Satellite 500-4 (CAS500-4), scheduled to be launched in 2025, is a mid-resolution satellite with a 5 m resolution developed for wide-area agriculture and forest observation. To utilize satellite images, it is important to establish a precision sensor model and thereby accurate geometric information. Previous research reported that a precision sensor model could be established automatically by matching ground control point (GCP) chips to satellite images. Therefore, to improve the geometric accuracy of satellite images, it is necessary to improve GCP chip matching performance. This paper proposes an improved GCP chip matching scheme for precision sensor modeling of mid-resolution satellite images. When matching high-resolution GCP chips against mid-resolution satellite images, there are two major issues: handling the resolution difference between GCP chips and satellite images, and finding the optimal number of GCP chips. To address these issues, this study compared and analyzed chip matching performance for various satellite image upsampling factors and various numbers of chips. RapidEye images with a resolution of 5 m were used as the mid-resolution satellite images. GCP chips were prepared from aerial orthoimages with a resolution of 0.25 m and satellite orthoimages with a resolution of 0.5 m. Accuracy analysis was performed using manually extracted reference points. The experimental results show that upsampling factors of two and three significantly improved sensor model accuracy, and that accuracy was maintained when the number of GCP chips was reduced to around 100. The results confirm the possibility of applying high-resolution GCP chips to automated precision sensor modeling of mid-resolution satellite images with improved accuracy. The results of this study are expected to be used to establish a precise sensor model for CAS500-4.
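The core operation, matching a high-resolution GCP chip against an upsampled mid-resolution image, can be sketched with plain template matching. The OpenCV example below upsamples the satellite image by a chosen factor, finds the chip by normalized cross-correlation, and converts the peak back to original-resolution pixel coordinates; it is a simplified illustration of the idea under stated assumptions, not the matching pipeline used in the paper.

```python
import cv2
import numpy as np

def match_chip(sat_img, chip, upsample=3):
    """Locate a GCP chip in a mid-resolution satellite image after upsampling.

    Both inputs are single-band float32 arrays; the chip is assumed to have been
    resampled to the upsampled satellite GSD beforehand (an assumption for this sketch).
    """
    up = cv2.resize(sat_img, None, fx=upsample, fy=upsample, interpolation=cv2.INTER_CUBIC)
    score = cv2.matchTemplate(up, chip, cv2.TM_CCOEFF_NORMED)
    _, peak, _, top_left = cv2.minMaxLoc(score)          # top_left is (col, row) of best match
    col, row = top_left
    # Convert the match centre back to original-resolution pixel coordinates.
    centre_rc = ((row + chip.shape[0] / 2.0) / upsample,
                 (col + chip.shape[1] / 2.0) / upsample)
    return centre_rc, peak
```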

Incorporating Social Relationship discovered from User's Behavior into Collaborative Filtering (사용자 행동 기반의 사회적 관계를 결합한 사용자 협업적 여과 방법)

  • Thay, Setha;Ha, Inay;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems / v.19 no.2 / pp.1-20 / 2013
  • Nowadays, social networks are huge communication platforms that allow people to connect with one another and bring users together to share common interests, experiences, and daily activities. Users spend hours per day maintaining personal information and interacting with other people via posts, comments, messages, games, social events, and applications. Due to the growth of users' distributed information in social networks, there is great potential to utilize these social data to enhance the quality of recommender systems. Some research on social network analysis has investigated how social networks can be used in the recommendation domain. Among these studies, we are interested in taking advantage of the interactions between a user and others in a social network, which can be captured as social relationships. Furthermore, users' decisions before purchasing a product often depend on the suggestions of people who have either the same preferences or a close relationship with them. For this reason, we believe that users' relationships in a social network can provide an effective way to improve the quality of interest prediction in recommender systems. Therefore, the social relationships between users extracted from a social network are a useful factor for improving the prediction of user preferences over the conventional approach. Recommender systems are dramatically increasing in popularity and are currently used by many e-commerce sites such as Amazon.com, Last.fm, and eBay.com. Collaborative filtering (CF) is one of the essential and powerful techniques in recommender systems for suggesting appropriate items to users by learning their preferences. CF focuses on user data and generates automatic predictions about a user's interests by gathering information from users who share a similar background and preferences. Specifically, the intention of CF is to find users who have similar preferences and to suggest to the target user the items most preferred by those nearest-neighbor users. There are two basic units considered by CF: the user and the item. Each user provides rating values on items (movies, products, books, etc.) to indicate interest in those items. CF uses the user-rating matrix to find a group of users whose ratings are similar to the target user's and then predicts unknown rating values for items the target user has not rated. CF has been successfully implemented in both information filtering and e-commerce applications; however, important challenges remain, such as cold start, data sparsity, and scalability, which affect the quality and accuracy of prediction. To overcome these challenges, many researchers have proposed variants of CF such as hybrid CF, trust-based CF, and social-network-based CF. To improve the recommendation performance and prediction accuracy of standard CF, in this paper we propose a method that integrates the traditional CF technique with the social relationships between users discovered from their behavior in a social network, namely Facebook. We identify users' relationships from behavior such as posts and comments exchanged with friends on Facebook. We believe that social relationships implicitly inferred from user behavior can compensate for the limitations of the conventional approach. We therefore extract each user's posts and comments using the Facebook Graph API and calculate a feature score for each term to obtain a feature vector for computing user similarity. We then combine this result with the similarity value computed using the traditional CF technique. Finally, our system provides a list of recommended items according to the neighbor users with the largest total similarity to the target user. To verify and evaluate the proposed method, we performed an experiment on data collected from our Movies Rating System. Prediction accuracy is evaluated in terms of MAE to demonstrate the correctness of the recommendations, and recommendation performance is evaluated in terms of precision, recall, and F1-measure. Coverage is also evaluated to assess the ability to generate recommendations. The experimental results show that the proposed method is more accurate in suggesting items to users and achieves better performance. Incorporating user behavior in the social network improves recommendation accuracy by up to 6%, and the recommendation performance experiment shows that incorporating social relationships observed from user behavior into CF is beneficial, with a 7% improvement in performance compared with benchmark methods. Finally, we confirm that interactions between users in a social network can enhance accuracy and give better recommendations than the conventional approach.
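The essence of the proposed combination is a weighted sum of two user-user similarity matrices, one from the rating matrix and one from text features of posts and comments, which is then used for neighborhood-based prediction. The sketch below shows that idea on toy data with scikit-learn; the weighting, the TF-IDF features, and all data are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy data standing in for a movie rating matrix and per-user Facebook posts.
ratings = np.array([            # users x items, 0 = not rated
    [5, 4, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 5, 4],
], dtype=float)
posts = [                       # concatenated posts/comments per user (invented text)
    "weekend hiking photos and jazz concert",
    "jazz records and live concert reviews",
    "new horror movie trailer reactions",
]

sim_cf = cosine_similarity(ratings)                                      # rating-based similarity
sim_social = cosine_similarity(TfidfVectorizer().fit_transform(posts))   # behavior-based similarity
w = 0.7                                                                  # weighting is an assumption
sim = w * sim_cf + (1 - w) * sim_social

def predict(user, item, k=2):
    """Predict an unknown rating from the k most similar neighbors who rated the item."""
    neighbors = [u for u in np.argsort(-sim[user]) if u != user and ratings[u, item] > 0][:k]
    weights = sim[user, neighbors]
    return float(np.dot(weights, ratings[neighbors, item]) / (weights.sum() + 1e-9))

print(predict(user=1, item=2))   # user 1's predicted rating for item 2
```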