• Title/Summary/Keyword: Semantic Location Model

Search Result 23

Association-Based Knowledge Model for Supporting Diagnosis of a Capsule Endoscopy (캡슐내시경 검사의 진단 보조를 위한 연관성 기반 지식 모델)

  • Hwang, Gyubon;Park, Ye-Seul;Lee, Jung-Won
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.6 no.10
    • /
    • pp.493-498
    • /
    • 2017
  • Capsule endoscopy is specialized for observing the small intestine, which is difficult to access with general endoscopy. The diagnostic procedure for capsule endoscopy consists of three stages: examination of indicants, endoscopy, and diagnosis. The key information needed for diagnosis includes indicants, lesions, and suspected-disease information. In this paper, this information is defined as semantic features, and the extraction process is defined as semantic-based analysis, which is performed throughout the capsule endoscopy procedure. First, the patient's symptoms are checked before capsule endoscopy to obtain information on suspected diseases. Next, capsule endoscopy is performed with the suspected diseases in mind. Finally, a diagnosis is reached using supporting information. Several associations are used to reach the diagnosis: for example, the disease association between a symptom and a disease for identifying an expected disease, and the anatomical association between the location of a lesion and supporting information. However, existing knowledge models such as MST and CEST only list simple terms related to endoscopy and cannot represent such semantic associations. Therefore, in this paper, we propose an association-based knowledge model for supporting the diagnosis of capsule endoscopy. The proposed model is divided into two parts: a disease model and an anatomical model of the small intestine, the organ of interest in capsule endoscopy. It can effectively support diagnosis by providing key information for capsule endoscopy.
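The two kinds of associations the abstract describes can be pictured as a small lookup structure. The sketch below is a minimal illustration of the idea, with entirely hypothetical symptom, disease, and anatomy entries (the paper's actual model contents are not reproduced here):

```python
# Hypothetical disease associations: symptom -> candidate diseases.
disease_assoc = {
    "abdominal pain": {"Crohn's disease", "small bowel ulcer"},
    "bleeding": {"angiodysplasia", "small bowel tumor"},
}

# Hypothetical anatomical associations: lesion location -> plausible diseases.
anatomical_assoc = {
    "jejunum": {"Crohn's disease"},
    "ileum": {"Crohn's disease", "small bowel tumor"},
}

def suspected_diseases(symptoms, lesion_location):
    """Intersect symptom-driven candidates with anatomically plausible ones."""
    by_symptom = set().union(*(disease_assoc.get(s, set()) for s in symptoms))
    by_anatomy = anatomical_assoc.get(lesion_location, set())
    return by_symptom & by_anatomy

print(suspected_diseases(["abdominal pain"], "ileum"))  # {"Crohn's disease"}
```

Combining the two associations narrows the candidate set, which is the diagnostic-support role the abstract assigns to the knowledge model.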

LSTM-based Model for Effective Sensor Filtering in Sensor Registry System (센서 레지스트리 시스템에서 효율적인 센서 필터링을 위한 LSTM 기반 모델)

  • Chen, Haotian;Jung, Hyunjun;Lee, Sukhoon;On, Byung-Won;Jeong, Dongwon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.10a
    • /
    • pp.12-14
    • /
    • 2021
  • A sensor registry system (SRS) provides semantic metadata about a sensor based on the location information of a mobile device, in order to solve the problem of interoperability between sensors and devices. However, if the mobile device's GPS position is received incorrectly, the SRS returns incorrect sensor information and cannot connect with the sensor. To address this problem, this paper proposes a dual collaboration strategy based on geographical embedding and LSTM-based path prediction to improve the probability of successful requests between mobile devices and sensors, and evaluates it with a Monte Carlo approach. Experiments showed that the proposed method can compensate for location abnormalities and serves as an effective multicasting mechanism.

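The Monte Carlo evaluation mentioned in the abstract can be sketched in a few lines: repeatedly perturb a true position with GPS noise and count how often the noisy fix still falls inside a sensor's service area. The geometry and parameters below are hypothetical stand-ins, not the paper's actual experimental setup:

```python
import random

def monte_carlo_success_rate(true_pos, sensor_pos, radius, gps_sigma,
                             n_trials=10000, seed=42):
    """Estimate the probability that a GPS fix corrupted by Gaussian noise
    still lands within a sensor's service radius."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        # Perturb the true position with independent Gaussian GPS noise.
        x = true_pos[0] + rng.gauss(0, gps_sigma)
        y = true_pos[1] + rng.gauss(0, gps_sigma)
        dist_sq = (x - sensor_pos[0]) ** 2 + (y - sensor_pos[1]) ** 2
        hits += dist_sq <= radius ** 2
    return hits / n_trials

# Device standing on top of the sensor, 30 m radius, 10 m GPS noise.
rate = monte_carlo_success_rate((0.0, 0.0), (0.0, 0.0),
                                radius=30.0, gps_sigma=10.0)
```

In this configuration the analytic success probability is 1 - exp(-radius²/(2σ²)) ≈ 0.989, so the estimate should land close to that value; the paper's strategy aims to raise such success rates when raw GPS is unreliable.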

Change Detection Using Deep Learning Based Semantic Segmentation for Nuclear Activity Detection and Monitoring (핵 활동 탐지 및 감시를 위한 딥러닝 기반 의미론적 분할을 활용한 변화 탐지)

  • Song, Ahram;Lee, Changhui;Lee, Jinmin;Han, Youkyung
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.6_1
    • /
    • pp.991-1005
    • /
    • 2022
  • Satellite imagery is an effective supplementary data source for detecting and verifying nuclear activity, and it is particularly valuable for regions with limited access and information, such as nuclear installations. Time-series analysis, in particular, can identify preparations for conducting a nuclear experiment, such as relocating equipment or changing facilities. In this work, differences in the semantic segmentation results of time-series images were used to detect changes in meaningful objects connected to nuclear activity. Building, road, and small-object datasets made from KOMPSAT-3/3A images provided by AIHub were used to train deep learning models such as U-Net, PSPNet, and Attention U-Net, and model parameters were adjusted to select suitable models for each target. The final change detection was carried out by incorporating object information into the initial change detection, which was obtained as the difference between semantic segmentation results. The experimental results demonstrate that the proposed approach can effectively identify changed pixels. Although the approach depends on the accuracy of the semantic segmentation results, its applicable scope is expected to grow as the dataset for the region of interest expands.
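The core of difference-based change detection over segmentation maps is a per-pixel comparison of the two predicted class maps, restricted to the object classes of interest. The following is a simplified sketch of that step only (class ids and toy arrays are hypothetical; the paper's full pipeline also incorporates object information):

```python
import numpy as np

def change_mask(seg_before, seg_after, classes_of_interest):
    """Flag pixels whose predicted class changed between two dates,
    keeping only changes that involve a monitored class."""
    changed = seg_before != seg_after
    relevant = (np.isin(seg_before, classes_of_interest)
                | np.isin(seg_after, classes_of_interest))
    return changed & relevant

# Toy label maps: 0 = background, 1 = building, 2 = road.
before = np.array([[0, 1], [2, 0]])
after  = np.array([[0, 0], [2, 1]])
print(change_mask(before, after, classes_of_interest=[1, 2]))
```

Masking with `classes_of_interest` suppresses irrelevant background fluctuations, which mirrors the abstract's focus on changes in "meaningful objects" rather than all pixel differences.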

A Technique for Extracting GeoSemantic Knowledge from Micro-blog (마이크로 블로그기반의 공간 지식 추출 기법연구)

  • Ha, Su-Wook;Nam, Kwang-Woo;Ryu, Keun-Ho
    • Spatial Information Research
    • /
    • v.20 no.2
    • /
    • pp.129-136
    • /
    • 2012
  • Recently, international organizations such as ISO/TC211, OGC, and INSPIRE (Infrastructure for Spatial Information in Europe) have made efforts to share geospatial data using semantic web technologies. In addition, smartphones and social networking services enable community-based opportunities for participants to share issues about social phenomena in a geographic area, and many researchers are trying to find methods for extracting such issues. However, serviceable spatial ontologies are still insufficient at the application level, and studies of spatial information extraction from SNS have focused on finding users' locations or on geocoding by text mining. A study that extracts spatial phenomena from social media information and converts them into geosemantic knowledge is therefore highly useful. In this paper, we propose a framework for extracting keywords from micro-blogs, one of the social media services, finding their relationships using data mining techniques, and converting them into spatiotemporal knowledge. The result of this study can be used as a procedure and ontology model for constructing geosemantic issues when implementing a related system, and it is expected to improve the effectiveness of finding, publishing, and analyzing spatial issues.
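The relationship-finding step the abstract describes can be approximated by simple co-occurrence mining: keyword pairs that repeatedly appear in the same posts become candidate edges in a geosemantic issue graph. The sketch below is a toy version of that mining step under that assumption, not the paper's actual framework:

```python
from collections import Counter
from itertools import combinations

def keyword_cooccurrence(posts, min_count=2):
    """Count keyword pairs co-occurring in the same micro-blog post and
    keep pairs that reach a support threshold."""
    pair_counts = Counter()
    for keywords in posts:
        # Sort and deduplicate so (a, b) and (b, a) count as one pair.
        for a, b in combinations(sorted(set(keywords)), 2):
            pair_counts[(a, b)] += 1
    return {pair: n for pair, n in pair_counts.items() if n >= min_count}

posts = [["flood", "river", "Seoul"],
         ["flood", "river"],
         ["traffic", "Seoul"]]
print(keyword_cooccurrence(posts))  # {('flood', 'river'): 2}
```

Pairs passing the threshold could then be attached to place and time attributes to form the spatiotemporal knowledge the framework targets.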

Comparison of Semantic Segmentation Performance of U-Net according to the Ratio of Small Objects for Nuclear Activity Monitoring (핵활동 모니터링을 위한 소형객체 비율에 따른 U-Net의 의미론적 분할 성능 비교)

  • Lee, Jinmin;Kim, Taeheon;Lee, Changhui;Lee, Hyunjin;Song, Ahram;Han, Youkyung
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.6_4
    • /
    • pp.1925-1934
    • /
    • 2022
  • Monitoring nuclear activity in inaccessible areas using remote sensing technology is essential for nuclear non-proliferation. In recent years, deep learning has been actively used to detect small objects related to nuclear activity. However, high-resolution satellite imagery containing small objects can suffer from class imbalance, which degrades small-object detection performance. Therefore, this study aims to improve detection accuracy by analyzing the effect of the ratio of small objects related to nuclear activity in the input data on the performance of the deep learning model. To this end, six case datasets with different ratios of small-object pixels were generated, and a U-Net model was trained for each case. Each trained model was then evaluated quantitatively and qualitatively using a test dataset containing various types of small-object classes. The results confirm that when the ratio of object pixels in the input image is adjusted, small objects related to nuclear activity can be detected efficiently. This study suggests that deep learning performance can be improved by adjusting the object-pixel ratio of the input data in the training dataset.
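Building case datasets by object-pixel ratio presupposes a way to measure that ratio per label image. A minimal sketch of that measurement, with hypothetical class ids (the study's actual class encoding is not given in the abstract):

```python
import numpy as np

def small_object_ratio(label, small_classes):
    """Fraction of pixels in a label image belonging to small-object classes;
    patches can then be binned into cases by this ratio."""
    return float(np.isin(label, small_classes).mean())

# Toy 2x2 label patch; class 3 stands in for a small-object class.
label = np.array([[0, 3],
                  [3, 0]])
r = small_object_ratio(label, small_classes=[3])  # 0.5
```

Sorting training patches by this ratio and sampling to hit a target value is one straightforward way to construct the six cases the study compares.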

Object-Oriented Retrieval Framework to Construct the Reuse-Supporting Systems (재사용 시스템 개발을 위한 객체지향 검식 프레임워크)

  • Kim, Jung-A;Moon, Chung-Ryeal;Kim, Seung-Tae
    • The Transactions of the Korea Information Processing Society
    • /
    • v.2 no.5
    • /
    • pp.711-720
    • /
    • 1995
  • This paper describes an object-oriented retrieval framework designed to store and retrieve reusable components from a library regardless of the library's underlying representation. We propose a retrieval framework on a visual space so that reusers can identify their location within the library without any prior knowledge of the library structure. They can decide the direction of retrieval from the results displayed on the visual space and interact with the library using a defined set of simple retrieval operations that access library information objects. To this end, the 4I model is proposed. Librarians as well as reusers can easily construct a new library in the visual environment; this is the process of assigning semantics to information objects. This paper discusses the basic concepts of the 4I model, explains each of its constituents, and shows a simple example of the system.


A MVC Framework for Visualizing Text Data (텍스트 데이터 시각화를 위한 MVC 프레임워크)

  • Choi, Kwang Sun;Jeong, Kyo Sung;Kim, Soo Dong
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.39-58
    • /
    • 2014
  • As the importance of big data and related technologies continues to grow in industry, visualizing the results of processing and analyzing big data has become a highlighted topic. Visualization delivers effectiveness and clarity in understanding analysis results, and it also serves as the GUI (Graphical User Interface) that supports communication between people and analysis systems. To make development and maintenance easier, these GUI parts should be loosely coupled from the data processing and analysis parts, and implementing a loosely coupled architecture requires design patterns such as MVC (Model-View-Controller), which is designed to minimize coupling between the UI part and the data processing part. Big data can be classified as structured or unstructured, and structured data is relatively easy to visualize compared to unstructured data. Nevertheless, as the use and analysis of unstructured data has spread, visualization systems are usually developed anew for each project to overcome the limitations of traditional visualization systems for structured data. Furthermore, for text data, which covers a huge part of unstructured data, visualization is even more difficult. This results from the complexity of the technologies for analyzing text data, such as linguistic analysis, text mining, and social network analysis, and from the fact that these technologies are not standardized, which makes it difficult to reuse the visualization system of one project in other projects. We assume that the reason is a lack of commonality in visualization system design with expansion to other systems in mind. In our research, we suggest a common information model for visualizing text data and propose a comprehensive and reusable framework, TexVizu, for visualizing text data. We first survey representative research in the text visualization area, identify common elements and common patterns for text visualization across various cases, and review and analyze those elements and patterns from three viewpoints: structural, interactive, and semantic. We then design an integrated model of text data that represents the elements for visualization. The structural viewpoint identifies structural elements of various text documents, such as title, author, and body. The interactive viewpoint identifies the types of relations and interactions between text documents, such as post, comment, and reply. The semantic viewpoint identifies semantic elements that are extracted from linguistic analysis of the text data and represented as tags classifying entity types such as people, place or location, time, and event. We then extract and select common requirements for visualizing text data, categorized into four types: structure information, content information, relation information, and trend information. Each type of requirement comprises the required visualization techniques, data, and goal (what to know). These are the key requirements for designing a framework in which the visualization system remains loosely coupled from the data processing and analysis systems. Finally, we design a common text visualization framework, TexVizu, which is reusable and extensible across various visualization projects by collaborating with various Text Data Loaders and Analytical Text Data Visualizers via common interfaces such as ITextDataLoader and IATDProvider. TexVizu also comprises an Analytical Text Data Model, Analytical Text Data Storage, and Analytical Text Data Controller; external components are the specifications of the interfaces required to collaborate with the framework. As an experiment, we adopt this framework in two text visualization systems: a social opinion mining system and an online news analysis system.
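The loose coupling the abstract attributes to the ITextDataLoader and IATDProvider interfaces can be illustrated with abstract base classes. Only the interface names come from the abstract; the method signatures below are illustrative assumptions, since the listing does not publish them:

```python
from abc import ABC, abstractmethod

class ITextDataLoader(ABC):
    """Interface a Text Data Loader must satisfy to plug into the framework
    (hypothetical signature)."""
    @abstractmethod
    def load(self, source: str) -> list[str]:
        """Return raw text documents from the given source."""

class IATDProvider(ABC):
    """Interface through which a visualizer pulls analytical text data
    (hypothetical signature)."""
    @abstractmethod
    def get_analytical_data(self) -> dict:
        """Return analyzed text data keyed by requirement type."""

class NewsLoader(ITextDataLoader):
    """Example concrete loader; the framework never depends on this class,
    only on the ITextDataLoader interface."""
    def load(self, source: str) -> list[str]:
        return [f"article from {source}"]  # stand-in for real loading

loader = NewsLoader()
docs = loader.load("online-news")
```

Because the framework programs against the interfaces rather than concrete loaders or visualizers, swapping the news loader for a social-opinion loader requires no change to the framework itself, which is the MVC-style decoupling the paper argues for.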

Machine Learning Based MMS Point Cloud Semantic Segmentation (머신러닝 기반 MMS Point Cloud 의미론적 분할)

  • Bae, Jaegu;Seo, Dongju;Kim, Jinsoo
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.5_3
    • /
    • pp.939-951
    • /
    • 2022
  • The most important factor in designing autonomous driving systems is to recognize the exact location of the vehicle within the surrounding environment. To date, various sensors and navigation systems have been used for autonomous driving systems; however, all have limitations. Therefore, the need for high-definition (HD) maps that provide high-precision infrastructure information for safe and convenient autonomous driving is increasing. HD maps are drawn using three-dimensional point cloud data acquired through a mobile mapping system (MMS). However, this process requires manual work due to the large numbers of points and drawing layers, increasing the cost and effort associated with HD mapping. The objective of this study was to improve the efficiency of HD mapping by segmenting semantic information in an MMS point cloud into six classes: roads, curbs, sidewalks, medians, lanes, and other elements. Segmentation was performed using various machine learning techniques, including random forest (RF), support vector machine (SVM), k-nearest neighbor (KNN), and gradient-boosting machine (GBM), with 11 variables covering geometry, color, intensity, and other road design features. MMS point cloud data for a 130-m section of a five-lane road near Minam Station in Busan were used to evaluate the segmentation models; the average F1 scores of the models were 95.43% for RF, 92.1% for SVM, 91.05% for GBM, and 82.63% for KNN. The RF model showed the best segmentation performance, with F1 scores of 99.3%, 95.5%, 94.5%, 93.5%, and 90.1% for roads, sidewalks, curbs, medians, and lanes, respectively. In the RF model's variable importance results, the road-design-related variables XY dist. and Z dist. showed high mean decrease in accuracy and mean decrease in Gini, respectively; thus, variables related to road design contributed significantly to the segmentation of semantic information. The results of this study demonstrate the applicability of machine-learning-based segmentation of MMS point cloud data and will help to reduce the cost and effort associated with HD mapping.
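Of the four classifiers compared, KNN is compact enough to sketch directly: each point is labeled by majority vote of its nearest neighbors in feature space. The toy features below (height and intensity) are synthetic stand-ins for the study's 11 variables:

```python
import numpy as np

def knn_segment(train_X, train_y, query_X, k=3):
    """Label each query point by majority vote of its k nearest
    training points in feature space."""
    labels = []
    for q in query_X:
        d = np.linalg.norm(train_X - q, axis=1)   # distances to all training points
        nearest = train_y[np.argsort(d)[:k]]      # labels of the k closest
        vals, counts = np.unique(nearest, return_counts=True)
        labels.append(vals[np.argmax(counts)])    # majority vote
    return np.array(labels)

# Toy per-point features [height z, intensity]; 0 = road, 1 = curb.
train_X = np.array([[0.00, 0.20], [0.02, 0.25], [0.15, 0.60], [0.18, 0.55]])
train_y = np.array([0, 0, 1, 1])
pred = knn_segment(train_X, train_y,
                   np.array([[0.01, 0.22], [0.16, 0.58]]))
```

The study's better-performing RF and GBM models replace the vote with learned tree ensembles, but they consume the same per-point feature vectors.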

Deep Learning Approach for Automatic Discontinuity Mapping on 3D Model of Tunnel Face (터널 막장 3차원 지형모델 상에서의 불연속면 자동 매핑을 위한 딥러닝 기법 적용 방안)

  • Chuyen Pham;Hyu-Soung Shin
    • Tunnel and Underground Space
    • /
    • v.33 no.6
    • /
    • pp.508-518
    • /
    • 2023
  • This paper presents a new approach for the automatic mapping of discontinuities in a tunnel face based on its 3D digital model reconstructed by LiDAR scanning or photogrammetry. The main idea revolves around identifying discontinuity areas in the 3D digital model of a tunnel face by segmenting its 2D projected images with U-Net, a deep-learning semantic segmentation model. The proposed model integrates various features, including the projected RGB image, the depth map image, and images based on local surface properties, i.e., normal vector and curvature images, to effectively segment areas of discontinuity. The segmentation results are then projected back onto the 3D model using depth maps and projection matrices to obtain an accurate representation of the location and extent of discontinuities in 3D space. The performance of the segmentation model is evaluated by comparing the segmented results with their corresponding ground truths, demonstrating high accuracy with an intersection-over-union metric of approximately 0.8. Although training data are still limited, this method exhibits promising potential to address the limitations of conventional approaches, which rely only on normal vectors and unsupervised machine learning algorithms to group points in the 3D model into distinct sets of discontinuities.
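The back-projection step, mapping a segmented 2D pixel with known depth into 3D camera coordinates, follows the standard pinhole camera model. A textbook sketch with a hypothetical intrinsic matrix (the paper's calibration values are not given in the abstract):

```python
import numpy as np

def backproject(u, v, depth, K):
    """Back-project pixel (u, v) with known depth to a 3D point in the
    camera frame, using pinhole intrinsics K."""
    fx, fy = K[0, 0], K[1, 1]   # focal lengths in pixels
    cx, cy = K[0, 2], K[1, 2]   # principal point
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# Hypothetical intrinsics: 500 px focal length, 640x480 image center.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
p = backproject(420.0, 240.0, depth=2.0, K=K)  # x = 0.4, y = 0.0, z = 2.0
```

Applying this per segmented pixel, followed by the camera-to-model transform encoded in the projection matrices, yields the 3D extent of each discontinuity region.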

Classification of Industrial Parks and Quarries Using U-Net from KOMPSAT-3/3A Imagery (KOMPSAT-3/3A 영상으로부터 U-Net을 이용한 산업단지와 채석장 분류)

  • Che-Won Park;Hyung-Sup Jung;Won-Jin Lee;Kwang-Jae Lee;Kwan-Young Oh;Jae-Young Chang;Moung-Jin Lee
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.6_3
    • /
    • pp.1679-1692
    • /
    • 2023
  • South Korea is a country that emits a large amount of pollutants as a result of population growth and industrial development, and it is also severely affected by transboundary air pollution due to its geographical location. As pollutants from both domestic and foreign sources contribute to air pollution in Korea, the locations of air pollutant emission sources are crucial for understanding the movement and distribution of pollutants in the atmosphere and for establishing national-level air pollution management and response strategies. Against this background, this study aims to effectively acquire spatial information on domestic and international air pollutant emission sources, which is essential for analyzing the air pollution status, by utilizing high-resolution optical satellite images and deep-learning-based image segmentation models. In particular, industrial parks and quarries, which have been evaluated as contributing significantly to transboundary air pollution, were selected as the main research subjects, and KOMPSAT-3 and 3A images of these areas were collected, preprocessed, and converted into input and label data for model training. Training the U-Net model on this data achieved an overall accuracy of 0.8484 and a mean Intersection over Union (mIoU) of 0.6490, and the predicted maps extracted object boundaries more accurately than the label data created by coarse annotations.
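The mIoU metric reported above is computed per class and averaged. A minimal sketch over toy arrays (not the study's actual evaluation code):

```python
import numpy as np

def mean_iou(pred, label, num_classes):
    """Mean Intersection over Union across classes."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, label == c).sum()
        union = np.logical_or(pred == c, label == c).sum()
        if union > 0:  # skip classes absent from both prediction and label
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x2 maps: 0 = background, 1 = industrial park / quarry.
pred  = np.array([[0, 1], [1, 1]])
label = np.array([[0, 1], [0, 1]])
m = mean_iou(pred, label, num_classes=2)  # (1/2 + 2/3) / 2 = 7/12
```

Averaging per-class IoU rather than pooling pixels keeps the metric from being dominated by the abundant background class, which matters for sparse targets such as quarries.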