A new scheme for finding the biggest rectangle that doesn't have any obstacle (장애물을 제외한 가장 큰 공간을 찾는 기법)

  • Hwang, Jung-Hwan;Jeon, Heung-Seok
    • The KIPS Transactions:PartA
    • /
    • v.18A no.2
    • /
    • pp.75-80
    • /
    • 2011
  • Recently, many cleaning robots have been developed with various algorithms for efficient cleaning. One of them is the DmaxCoverage algorithm, which cleans efficiently when the robot has a time limit. This algorithm uses the Rectangle Tiling method to find the biggest rectangle that does not contain any obstacle. When the robot uses a grid map, Rectangle Tiling can find the optimal value because it enumerates all of the rectangles in the grid map. However, when the grid map is large, it takes a long time because of the large number of rectangles. In this paper, we propose the Four Direction Rectangle Scanning (FDRS) method, which has accuracy similar to Rectangle Tiling but is faster. Instead of enumerating all rectangles, FDRS scans outward in all four directions from each obstacle. We demonstrate the FDRS method's performance by comparing FDRS with Rectangle Tiling.
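As a reference point for the Rectangle Tiling baseline described above, the largest obstacle-free rectangle in a grid map can also be found with the classic histogram-based maximal-rectangle algorithm; a minimal sketch (this is not the paper's FDRS method, and the function name is illustrative):

```python
def largest_free_rectangle(grid):
    """Return the area of the largest rectangle containing no obstacle.

    grid: list of lists, 1 = obstacle, 0 = free cell.
    Classic histogram-based maximal-rectangle algorithm, O(rows * cols),
    versus the enumeration of all rectangles used by Rectangle Tiling.
    """
    if not grid:
        return 0
    cols = len(grid[0])
    heights = [0] * cols
    best = 0
    for row in grid:
        # Extend the histogram of consecutive free cells above each column.
        for c in range(cols):
            heights[c] = heights[c] + 1 if row[c] == 0 else 0
        # Largest rectangle in the histogram via a monotonic stack.
        stack = []  # (start index, height)
        for c, h in enumerate(heights + [0]):  # sentinel 0 flushes the stack
            start = c
            while stack and stack[-1][1] >= h:
                start, sh = stack.pop()
                best = max(best, sh * (c - start))
            stack.append((start, h))
    return best
```

A 3x3 map with one obstacle in the top-right corner, for example, yields the 2x3 block below it as the largest free rectangle.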

Learning Rules for Identifying Hypernyms in Machine Readable Dictionaries (기계가독형사전에서 상위어 판별을 위한 규칙 학습)

  • Choi Seon-Hwa;Park Hyuk-Ro
    • The KIPS Transactions:PartB
    • /
    • v.13B no.2 s.105
    • /
    • pp.171-178
    • /
    • 2006
  • Most approaches for extracting the hypernyms of a noun from its definitions in a machine-readable dictionary (MRD) rely on lexical patterns compiled by human experts. Not only do these approaches incur a high cost for compiling lexical patterns, but it is also very difficult for human experts to compile a set of lexical patterns with broad coverage, because natural languages contain many different expressions for the same concept. To alleviate these problems, this paper proposes a new method for extracting the hypernyms of a noun from its MRD definitions. In the proposed approach, we use only syntactic (part-of-speech) patterns instead of lexical patterns to identify hypernyms, which reduces the number of patterns while keeping their coverage broad. Our experiments show that the classification accuracy of the proposed method is 92.37%, significantly better than that of previous approaches.
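The syntactic-pattern idea can be sketched as follows; the POS tag set, the patterns, and the function are hypothetical illustrations, not the rules actually learned in the paper:

```python
# Hypothetical sketch: pick a hypernym candidate from a POS-tagged
# dictionary definition by matching syntactic (POS) patterns rather
# than fixed lexical patterns. Tags and patterns are illustrative only.
PATTERNS = [
    # e.g. "a kind of <NOUN>": the final noun in the span is the head.
    ("DET", "NOUN", "ADP", "NOUN"),
    ("DET", "ADJ", "NOUN"),
]

def find_hypernym(tagged_definition):
    """tagged_definition: list of (word, pos) pairs for one definition."""
    tags = tuple(pos for _, pos in tagged_definition)
    for pattern in PATTERNS:
        n = len(pattern)
        for i in range(len(tags) - n + 1):
            if tags[i:i + n] == pattern:
                # Take the final NOUN of the matched span as the hypernym.
                for word, pos in reversed(tagged_definition[i:i + n]):
                    if pos == "NOUN":
                        return word
    return None
```

Because the patterns refer only to POS categories, one pattern covers many lexical variants ("a kind of X", "a sort of X", ...), which is the coverage argument made in the abstract.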

Detection of Innate and Artificial Mitochondrial DNA Heteroplasmy by Massively Parallel Sequencing: Considerations for Analysis

  • Kim, Moon-Young;Cho, Sohee;Lee, Ji Hyun;Seo, Hee Jin;Lee, Soong Deok
    • Journal of Korean Medical Science
    • /
    • v.33 no.52
    • /
    • pp.337.1-337.14
    • /
    • 2018
  • Background: Mitochondrial heteroplasmy, the co-existence of different mitochondrial polymorphisms within an individual, has various forensic and clinical implications. However, there is still no guideline on the application of massively parallel sequencing (MPS) to heteroplasmy detection. We present here some critical issues that should be considered in heteroplasmy studies using MPS. Methods: Among five samples with known innate heteroplasmies, two pairs of mixtures were generated for artificial heteroplasmies with target minor allele frequencies (MAFs) ranging from 50% to 1%. Each sample was amplified by a two-amplicon method and sequenced on an Ion Torrent system. The outcomes of two different analysis tools, Torrent Suite Variant Caller (TVC) and mtDNA-Server (mDS), were compared. Results: All the innate heteroplasmies were detected correctly by both analysis tools. Average MAFs of artificial heteroplasmies correlated well with the target values. The detection rates were almost 90% for high-level heteroplasmies but decreased for low-level heteroplasmies. TVC generally showed lower detection rates than mDS, which appears to be due to its computation algorithm, which drops out some reference-dominant heteroplasmies. Meanwhile, mDS reported several unintended low-level heteroplasmies, which were suggested to be nuclear mitochondrial DNA sequences. The average coverage depth of samples placed on the same chip showed considerable variation, and increasing the coverage depth had no effect on the detection rates. Conclusion: In addition to the general accuracy of the MPS application in detecting heteroplasmy, our study indicates that an understanding of the nature of mitochondrial DNA and of the analysis algorithm is crucial for appropriate interpretation of MPS results.
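The minor allele frequency targeted in the study can be computed from per-position read counts; a minimal illustrative sketch (the function and the count format are assumptions, not the TVC or mDS pipelines, which additionally apply quality and strand-bias filters):

```python
def minor_allele_frequency(read_counts):
    """Estimate the heteroplasmy level (minor allele frequency, MAF)
    at one mtDNA position from per-base read counts.

    read_counts: dict mapping base -> number of reads at this position.
    Returns the MAF as a fraction of total coverage, or 0.0 when fewer
    than two alleles are observed.
    """
    total = sum(read_counts.values())
    if total == 0:
        return 0.0
    counts = sorted(read_counts.values(), reverse=True)
    if len(counts) < 2:
        return 0.0
    # Second most frequent base defines the minor allele.
    return counts[1] / total
```

With 950 reads supporting A and 50 supporting G, this reports a 5% heteroplasmy, i.e. one of the mid-range target MAFs in the study design.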

Machine Learning-based landslide susceptibility mapping - Inje area, South Korea

  • Chanul Choi;Le Xuan Hien;Seongcheon Kwon;Giha Lee
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2023.05a
    • /
    • pp.248-248
    • /
    • 2023
  • In recent years, the number of landslides in Korea has been increasing due to extreme weather events such as localized heavy rainfall and typhoons. Landslides often occur together with debris flows, land subsidence, and earthquakes, and they cause significant damage to life and property. Because 64% of Korea's land area is mountainous, the government has sought to predict landslides in order to reduce damage. In response, the Korea Forest Service established a 'Landslide Information System' to predict the likelihood of landslides. This system selects a total of 13 landslide factors based on past landslide events and uses logistic regression (LR) to predict the possibility of a landslide occurrence; its accuracy is known to be 0.75. However, most of the data used for training the current system concern landslides that occurred from 2005 to 2011 and do not reflect recent typhoons or heavy rain. Therefore, in this study, we apply a total of six machine learning techniques (KNN, LR, SVM, XGB, RF, GNB) to predict the occurrence of landslides based on data for Inje, Gangwon-do, recently produced by the National Institute of Forest. To predict the occurrence of landslides, the landslide events and factor data must be converted into a form suitable for machine learning techniques using ArcGIS and Python. In addition, there is a large difference in the amount of data between areas where landslides did and did not occur, so prediction was performed after correcting the imbalanced data using the Tomek Links and Near Miss techniques. Moreover, to control for imbalanced data, a model that reflects soil properties will be used to remove absolutely safe areas.
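The Tomek Links cleaning step mentioned above can be sketched in plain NumPy; this is a simplified O(n²) illustration of the idea behind imbalanced-learn's `TomekLinks`, not the study's actual preprocessing code:

```python
import numpy as np

def remove_tomek_links(X, y, majority_label):
    """Drop majority-class samples that form Tomek links.

    A Tomek link is a pair of mutually nearest neighbours with different
    labels; removing the majority-class member of each pair cleans the
    class boundary of an imbalanced dataset.
    """
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    # Pairwise squared distances; inf on the diagonal so a point is
    # never its own nearest neighbour.
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)
    nn = d.argmin(axis=1)
    drop = set()
    for i, j in enumerate(nn):
        if nn[j] == i and y[i] != y[j]:      # mutual NN, opposite classes
            drop.add(i if y[i] == majority_label else j)
    keep = np.array([i not in drop for i in range(len(y))])
    return X[keep], y[keep]
```

In a 1-D example with majority-class points at 0.0, 1.0, 2.0 and minority points at 2.1 and 5.0, only the boundary pair (2.0, 2.1) forms a Tomek link, so the majority point at 2.0 is dropped.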

Factors Affecting the Accuracy of Internet Survey (인터넷 여론조사의 정확도 관련요인)

  • Cho, Sung-Kyum;Joo, Young-Soo;Cho, Eun-Hee
    • Survey Research
    • /
    • v.6 no.2
    • /
    • pp.51-74
    • /
    • 2005
  • Internet survey methods have become more and more widely used as the coverage of fixed-line telephones shrinks due to the diffusion of mobile phones, so there is a need to know how accurate this new survey method is. This study aims to estimate the accuracy of the internet survey method and to identify the factors affecting it. For this purpose, we analyzed election poll data from the 17th general election period, including fixed-line telephone survey data, internet survey data, mobile phone survey data, and the election voting results. The analysis shows that the prediction errors of the internet survey were slightly larger than those of the telephone or mobile phone surveys, but the differences are not significant. It follows from this result that the internet survey method can be used in social survey contexts. This study also found that the respondent's willingness to participate in the survey, the probability of being at home during the survey, and the respondent's educational level affected the accuracy of the internet survey. Further studies are needed to develop weighting methods based on these factors.
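A weighting method based on such factors typically starts from post-stratification; a hedged single-factor sketch (the group names and shares are illustrative, not figures from the study):

```python
def poststratification_weights(sample_counts, population_shares):
    """Weight respondents so the sample matches known population shares
    on an accuracy-related factor (e.g. education level).

    sample_counts:     dict group -> number of respondents in the sample.
    population_shares: dict group -> population proportion (sums to 1).
    Returns dict group -> per-respondent weight, so that the weighted
    sample reproduces the population composition.
    """
    n = sum(sample_counts.values())
    return {g: (population_shares[g] * n) / sample_counts[g]
            for g in sample_counts}
```

If a group is over-represented in an internet panel, its weight falls below 1; under-represented groups are weighted up, and the weighted counts sum back to the sample size.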

Feasibility Study on Integration of SSR Correction into Network RTK to Provide More Robust Service

  • Lim, Cheol-Soon;Park, Byungwoon;Kim, Dong-Uk;Kee, Chang-Don;Park, Kwan-Dong;Seo, Seungwoo;So, Hyoungmin;Park, Junpyo
    • Journal of Positioning, Navigation, and Timing
    • /
    • v.7 no.4
    • /
    • pp.295-305
    • /
    • 2018
  • Network RTK is a highly practical technology that can provide positioning accuracy at the cm~dm level regardless of user location within the network, by extending the available range of RTK using a reference station network. In particular, unlike other carrier-based positioning techniques such as PPP, users can acquire high-accuracy positions within a short initialization time of a few seconds to tens of seconds, which increases its value as a future navigation system. However, corrections must be received continuously to maintain a high level of positioning accuracy, and when a time delay of more than 30 seconds occurs, the accuracy may degrade to the meter level of code-based positioning. In the case of SSR, which is currently being standardized for PPP service, the corrections for each error source are transmitted at different intervals, and the rate of change of each correction is transmitted as well to compensate for time delay. Using these features of SSR corrections is expected to reduce the performance degradation even if users do not receive the network RTK corrections for more than 30 seconds. In this paper, simulation data were generated from 5 domestic reference stations in Gunwi, Yeongdeok, Daegu, Gimcheon, and Yecheon, and the network RTK and SSR corrections generated for these data were applied to simulation data from the Cheongsong reference station, assumed to be the user. In an experiment assuming 30 seconds of missing data, the positioning performance with SSR-based time-delay compensation was about 5 cm horizontal RMS and about 8 cm vertical RMS, and the 95% error was 8.7 cm horizontal and 1 cm vertical. This is a significant amount compared to the horizontal and vertical RMS of 0.3 cm and 0.6 cm, respectively, for network RTK without time delay on the same data, but is considerably smaller than the 0.5~1 m accuracy of DGPS or SBAS. Therefore, maintaining network RTK mode using SSR, rather than switching to code-based DGPS or SBAS mode when network RTK corrections are not received for 30 seconds, is considered favorable both for maintaining position accuracy and for quickly recovering performance by resolving the integer ambiguity once the communication channel is restored.
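The time-delay compensation described above relies on each SSR correction being broadcast together with its rate of change; a minimal sketch of the linear extrapolation during an outage (function and parameter names are illustrative):

```python
def extrapolate_correction(corr_value, corr_rate, corr_epoch, t):
    """Propagate an SSR correction forward during a correction outage.

    SSR messages carry each correction together with its rate of change,
    so a user can extrapolate linearly instead of falling back to
    meter-level code-based positioning when the stream drops out.

    corr_value : correction at epoch corr_epoch (metres)
    corr_rate  : rate of change of the correction (metres/second)
    corr_epoch : epoch of the last received correction (seconds)
    t          : current time (seconds)
    """
    return corr_value + corr_rate * (t - corr_epoch)
```

For a 30-second gap, a 1.20 m correction drifting at -2 mm/s is extrapolated to 1.14 m, which is what allows carrier-phase accuracy to degrade only to the cm level reported in the experiment rather than to meters.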

Development of a Classification Method for Forest Vegetation on the Stand Level, Using KOMPSAT-3A Imagery and Land Coverage Map (KOMPSAT-3A 위성영상과 토지피복도를 활용한 산림식생의 임상 분류법 개발)

  • Song, Ji-Yong;Jeong, Jong-Chul;Lee, Peter Sang-Hoon
    • Korean Journal of Environment and Ecology
    • /
    • v.32 no.6
    • /
    • pp.686-697
    • /
    • 2018
  • Due to advances in remote sensing technology, it has become easier to obtain high-resolution imagery more frequently to detect subtle changes over an extensive area, particularly in forest, which is not readily sub-classified. Time-series analysis of high-resolution images requires collecting an extensive amount of ground truth data. In this study, the potential of a land coverage map as ground truth data was tested for classifying high-resolution imagery. The study site was Wonju-si, Gangwon-do, South Korea, which has a mix of urban and natural areas. KOMPSAT-3A imagery taken in March 2015 and a land coverage map published in 2017 were used as source data. Two pixel-based classification algorithms, Support Vector Machine (SVM) and Random Forest (RF), were selected for the analysis. Classification of forest only was compared with classification of the whole study area except wetland. Confusion matrices from the classification showed that overall accuracies for both targets were higher with the RF algorithm than with SVM. While the overall accuracy of the forest-only analysis with RF was 18.3% higher than with SVM, for the whole-region analysis the difference was smaller, at 5.5%. For the SVM algorithm, adding a Majority analysis step yielded a marginal improvement of about 1% over the normal SVM analysis. The RF algorithm was more effective at identifying broad-leaved forest within the forest, but for the other classes the SVM algorithm was more effective. As only two pixel-based classification algorithms were tested here, it is expected that future classification will improve overall accuracy and reliability by introducing time-series analysis and object-based algorithms. This approach should contribute to improving large-scale land planning by providing an effective land classification method at higher spatial and temporal scales.
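The overall accuracy compared between SVM and RF above is simply the trace of the confusion matrix over its total; a small sketch (the example matrix is illustrative, not data from the study):

```python
import numpy as np

def overall_accuracy(confusion):
    """Overall accuracy from a confusion matrix (rows = reference classes,
    columns = predicted classes): correctly classified pixels on the
    diagonal divided by all classified pixels."""
    confusion = np.asarray(confusion)
    return confusion.trace() / confusion.sum()
```

For a two-class matrix with 50 and 35 correct pixels and 15 confused pixels, the overall accuracy is 0.85; comparing this figure between classifiers is how the 18.3% and 5.5% differences in the abstract are obtained.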

Development of the Simulation Tool to Predict a Coverage of the R-Mode System (지상파 통합항법 서비스의 성능예측 시뮬레이션 툴 개발)

  • Son, Pyo-Woong;Han, Younghoon;Lee, Sangheon;Park, Sanghyun
    • Journal of Navigation and Port Research
    • /
    • v.43 no.6
    • /
    • pp.429-436
    • /
    • 2019
  • The eLoran system is considered the best alternative because the vulnerability of satellite navigation systems cannot be completely resolved. South Korea is therefore in the process of establishing an eLoran testbed in the West Sea. To provide resilient navigation services to all waters, additional eLoran transmitters are required, but establishing them is difficult for various practical reasons. Instead, positioning with NDGNSS/AIS sources can expand the coverage, and a positioning algorithm applying continuous waves to these signals is under development. Using the already operating NDGNSS reference stations and AIS base stations, it is possible to operate the navigation system with higher accuracy than before. It is therefore crucial to predict the performance when the systems are integrated. In this paper, we developed a simulation tool that can predict the performance of a terrestrial integrated navigation (R-Mode) system using the eLoran system, maritime NDGNSS stations, and AIS stations. The estimated phase error of the received signal is calculated with the Cramer-Rao Lower Bound, factoring in the transmission power and the atmospheric noise for the transmission frequencies allocated by the ITU. Additionally, the simulation results are made more accurate by estimating the annual mean atmospheric noise of the 300 kHz signal from DGPS signal information collected at a maritime NDGNSS station. This approach further increases the reliability of the simulation results.
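For a sinusoid in white Gaussian noise, a simplified form of the Cramer-Rao Lower Bound on phase-estimation error can be sketched as follows; this assumes known frequency and amplitude and ignores the transmission-power and atmospheric-noise modeling used in the paper's tool:

```python
import math

def phase_crlb_std(snr_db, n_samples):
    """Lower bound on the standard deviation (radians) of a phase
    estimate for a sinusoid in white Gaussian noise, using the
    textbook Cramer-Rao bound CRLB(phase) = 1 / (N * SNR), where
    SNR is linear and N is the number of observed samples.
    Simplified sketch: known frequency and amplitude are assumed.
    """
    snr = 10 ** (snr_db / 10)         # dB -> linear
    return math.sqrt(1.0 / (n_samples * snr))
```

The bound shrinks with both SNR and observation length, which is why the tool's coverage prediction depends on transmission power and atmospheric noise at each candidate receiver location.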

Simplified Design of Commercial Pipes with Considering Secondary Losses (부차 손실을 고려한 상용관로의 간편 설계)

  • Yu, Dong-Hun;Jeong, Won-Guk
    • Journal of Korea Water Resources Association
    • /
    • v.34 no.1
    • /
    • pp.31-43
    • /
    • 2001
  • The friction factor of commercial pipe varies over a wide range depending on pipe type and pipe size. Various methods can describe this wide variation in friction factor with good accuracy, but they normally require iteration even for the solution of a simple case. A power law leads to an explicit solver, so the power law is rigorously employed here to develop a direct solution technique. The parameters in the present form of the power law are allowed to vary with pipe size and Reynolds number as well as pipe type, for wider coverage with good accuracy, whereas the Hazen-Williams equation permits only limited variation, accounting solely for the roughness or pipe type. Furthermore, secondary losses are considered in the development of explicit equations for the design of commercial pipes.
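The explicit (non-iterative) solution enabled by a power-law friction factor can be illustrated by solving the Darcy-Weisbach equation for velocity; the Blasius constants used as defaults are an assumption for the sketch, since the paper lets the parameters vary with pipe type, size, and Reynolds number:

```python
def velocity_power_law(hf, L, D, nu, a=0.316, b=0.25, g=9.81):
    """Explicit flow velocity (m/s) from head loss using a power-law
    friction factor f = a * Re**(-b) in the Darcy-Weisbach equation
    h_f = f * (L/D) * V**2 / (2*g), with Re = V*D/nu.

    Substituting f and solving for V gives the closed form below, so no
    iteration is needed, unlike with Colebrook-type friction formulas.
    Defaults a = 0.316, b = 0.25 are the Blasius smooth-pipe values.

    hf : head loss (m), L : pipe length (m), D : diameter (m),
    nu : kinematic viscosity (m^2/s).
    """
    return (2 * g * hf * D ** (1 + b) / (a * L * nu ** b)) ** (1 / (2 - b))
```

Substituting the returned velocity back through Re, f, and Darcy-Weisbach reproduces the given head loss, confirming the algebra; with parameters also varying by pipe size and Reynolds number, the same closed form covers a wider range than Hazen-Williams.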

Automated Signature Sharing to Enhance the Coverage of Zero-day Attacks (제로데이 공격 대응력 향상을 위한 시그니처 자동 공유 방안)

  • Kim, Sung-Ki;Jang, Jong-Soo;Min, Byoung-Joon
    • Journal of KIISE:Information Networking
    • /
    • v.37 no.4
    • /
    • pp.255-262
    • /
    • 2010
  • Recently, automated signature generation systems (ASGSs) have been developed to cope with zero-day attacks by malicious code exploiting vulnerabilities that are not yet publicly known. To enhance the usefulness of the signatures generated by ASGSs, it is essential to identify, among the many generated signatures, only those with high intrusion detection accuracy, and to deliver them to the target security systems in a timely manner. These automated signature exchange, distribution, and update operations have to be performed securely and universally across administrative network boundaries, and should also eliminate the noise in a signature set that degrades the performance of security systems. In this paper, we present a system architecture that supports identifying high-quality signatures and sharing them among security systems through a scheme that evaluates the detection accuracy of individual signatures, and we propose a set of algorithms for exchanging, distributing, and updating signatures. Through experiments on a test-bed, we confirmed that high-quality signatures are automatically retained to the degree that the noise rate of the signature set is reduced. The system architecture and the algorithms proposed in this paper can be adopted in an automated signature sharing framework.
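A signature-quality evaluation of the kind described can be sketched as precision/recall scoring over labeled traffic; the functions and the F1 threshold below are hypothetical illustrations, not the paper's actual evaluation scheme:

```python
def signature_quality(tp, fp, fn):
    """Score one generated signature on labeled traffic before sharing.

    tp: attacks correctly matched, fp: benign traffic matched (noise),
    fn: attacks missed. Precision penalizes false alarms, recall
    penalizes misses; F1 combines both into one quality score.
    """
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0, precision, recall
    f1 = 2 * precision * recall / (precision + recall)
    return f1, precision, recall

def select_for_sharing(signatures, threshold=0.9):
    """Keep only signatures whose F1 meets the threshold, reducing the
    noise rate of the shared signature set.

    signatures: dict name -> (tp, fp, fn) counts from evaluation.
    """
    return [sig for sig, counts in signatures.items()
            if signature_quality(*counts)[0] >= threshold]
```

Filtering on a quality threshold before distribution is one way to realize the paper's goal of sharing only signatures whose noise does not degrade the receiving security systems.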