• Title/Summary/Keyword: A* algorithm (A*알고리즘)


Development of the Risk Evaluation Model for Rear End Collision on the Basis of Microscopic Driving Behaviors (미시적 주행행태를 반영한 후미추돌위험 평가모형 개발)

  • Chung, Sung-Bong;Song, Ki-Han;Park, Chang-Ho;Chon, Kyung-Soo;Kho, Seung-Young
    • Journal of Korean Society of Transportation / v.22 no.6 / pp.133-144 / 2004
  • A model and a measure that can evaluate the risk of rear-end collision are developed. Most traffic accidents involve multiple causes, such as the human factor, the vehicle factor, and the highway element, at any given time; these factors should therefore be considered when analyzing the risk of an accident and developing safety models. Although most risky situations and accidents on the road result from a driver's poor response to various stimuli, many researchers have modeled risk or accidents by analyzing only the stimuli without considering the driver's response, so the reliability of those models turned out to be low. In developing the model, therefore, driver behaviors such as reaction time and deceleration rate are considered. In the past, most studies tried to analyze the relationship between risk and accidents directly, but because of the difficulty of finding the directional relationship between these factors, they developed models based on indirect factors such as volume and speed. However, if the relationship between risk and accidents is examined in detail, it can be seen that the two are linked by driver behavior: depending on the driver, the risk present in the road-vehicle system may be ignored or may draw the driver's attention. An accident therefore depends on how a driver handles risk, so the quantity most closely related to accident occurrence is not the risk itself but the risk as responded to by the driver. Thus, in this study, driver behaviors are considered in the model, and three accident-related concepts are introduced to reflect them. Safe stopping distance and accident occurrence probability were used for better understanding and more reliable modeling of the risk. An index that represents the risk is also developed based on measures used in evaluating noise level, and for risk comparison between various situations, an equivalent risk level, considering intensity and duration, is derived by means of a weighted average. Validation was performed with field surveys on an expressway in Seoul, where a test vehicle was used to collect traffic flow data such as deceleration rate, speed, and spacing. Based on these data, the risk by section, lane, and traffic flow condition is evaluated and compared with accident data and traffic conditions. The evaluated risk level corresponds closely to the patterns of actual traffic conditions and accident counts. The model and the method developed in this study can be applied to various fields, such as safety testing of traffic flow, establishment of operation and management strategies for reliable traffic flow, and safety testing of control algorithms in advanced safety vehicles.
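A minimal sketch of the kind of quantities this abstract combines: a safe stopping distance from driver reaction time and deceleration, a spacing-deficit risk proxy, and a duration-weighted "equivalent risk level" analogous to an equivalent noise level. The formulas are standard kinematics and a plain weighted average, not the paper's exact model; all names and numbers are illustrative.

```python
# Sketch only: standard kinematic stopping distance and a duration-weighted average,
# assumed to mirror the abstract's concepts; not the authors' actual formulation.

def safe_stopping_distance(speed_mps: float, reaction_time_s: float, decel_mps2: float) -> float:
    """Distance travelled during the driver's reaction plus the braking distance."""
    return speed_mps * reaction_time_s + speed_mps ** 2 / (2.0 * decel_mps2)

def spacing_deficit(spacing_m: float, speed_mps: float,
                    reaction_time_s: float, decel_mps2: float) -> float:
    """Risk proxy: how far the actual spacing falls short of the safe stopping distance."""
    return max(0.0, safe_stopping_distance(speed_mps, reaction_time_s, decel_mps2) - spacing_m)

def equivalent_risk_level(risks: list[float], durations_s: list[float]) -> float:
    """Duration-weighted average of instantaneous risk values over an observation period."""
    total = sum(durations_s)
    return sum(r * d for r, d in zip(risks, durations_s)) / total if total > 0 else 0.0

# Example: 25 m/s (90 km/h), 1.0 s reaction time, 3.4 m/s^2 deceleration, 40 m spacing.
risk = spacing_deficit(spacing_m=40.0, speed_mps=25.0, reaction_time_s=1.0, decel_mps2=3.4)
print(risk, equivalent_risk_level([risk, 0.0], [10.0, 50.0]))
```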

Validation of Surface Reflectance Product of KOMPSAT-3A Image Data: Application of RadCalNet Baotou (BTCN) Data (다목적실용위성 3A 영상 자료의 지표 반사도 성과 검증: RadCalNet Baotou(BTCN) 자료 적용 사례)

  • Kim, Kwangseob;Lee, Kiwon
    • Korean Journal of Remote Sensing / v.36 no.6_2 / pp.1509-1521 / 2020
  • Experiments validating the surface reflectance produced from Korea Multi-Purpose Satellite (KOMPSAT-3A) imagery were conducted using Chinese Baotou (BTCN) data, one of the four sites of the Radiometric Calibration Network (RadCalNet), a portal that provides spectrophotometrically measured reflectance. The atmospheric reflectance and surface reflectance products were generated using an extension program of the open-source Orfeo ToolBox (OTB), which was redesigned and implemented to extract those reflectance products in batches. Three image data sets from 2016, 2017, and 2018 were processed with two versions of the sensor model, ver. 1.4 released in 2017 and ver. 1.5 released in 2019, whose gain and offset values are applied in the absolute atmospheric correction. The results showed that the reflectance products generated with ver. 1.4 matched the RadCalNet BTCN data relatively well compared with those generated with ver. 1.5. In addition, surface reflectance products obtained from Landsat-8 imagery with the USGS LaSRC algorithm and from Sentinel-2B imagery with the SNAP Sen2Cor program were used to quantitatively verify the differences from those of KOMPSAT-3A. With respect to the RadCalNet BTCN data, the differences in the KOMPSAT-3A surface reflectance were highly consistent: -0.031 to 0.034 in the B band, -0.001 to 0.055 in the G band, -0.072 to 0.037 in the R band, and -0.060 to 0.022 in the NIR band. The surface reflectance of KOMPSAT-3A also reached an accuracy level suitable for further applications when compared with that of the Landsat-8 and Sentinel-2B images. These results confirm the applicability of Analysis Ready Data (ARD) for surface reflectance from high-resolution satellites.
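A minimal sketch of the gain/offset step where the two sensor-model versions (ver. 1.4 vs. ver. 1.5) differ: digital numbers are converted to radiance and then to top-of-atmosphere reflectance with the standard formula. The coefficients below (gain, offset, esun, d_au, sun_elev_deg) are placeholders, not KOMPSAT-3A metadata values, and the full surface-reflectance product additionally requires the atmospheric correction performed in OTB.

```python
# Sketch only: standard DN -> radiance -> TOA reflectance conversion with
# placeholder coefficients; not the paper's actual sensor-model values.
import math

def dn_to_radiance(dn: float, gain: float, offset: float) -> float:
    """Convert a digital number to at-sensor spectral radiance."""
    return gain * dn + offset

def toa_reflectance(radiance: float, esun: float, d_au: float, sun_elev_deg: float) -> float:
    """Standard top-of-atmosphere reflectance from radiance."""
    return (math.pi * radiance * d_au ** 2) / (esun * math.sin(math.radians(sun_elev_deg)))

# Placeholder band coefficients for illustration only.
rho = toa_reflectance(dn_to_radiance(dn=812.0, gain=0.02, offset=-1.5),
                      esun=1850.0, d_au=1.0, sun_elev_deg=45.0)
print(round(rho, 4))
```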

Development of the Filterable Water Sampler System for eDNA Filtering and Performance Evaluation of the System through eDNA Monitoring at Catchment Conduit Intake-Reservoir (eDNA 포집용 채수 필터시스템 개발과 집수매거 취수지 내에서의 성능평가)

  • Kwak, Tae-Soo;Kim, Won-Seok;Lee, Sun Ho;Kwak, Ihn-Sil
    • Korean Journal of Ecology and Environment / v.54 no.4 / pp.272-279 / 2021
  • A pump-type eDNA filtering system that can control voltage and hydraulic pressure, respectively, was developed, together with a filter case that allows filtering without damaging the filter. The filtering performance of the developed system was evaluated at a catchment conduit intake reservoir by comparing eDNA concentrations with those of the conventional vacuum-pressure filtering method. The developed system was divided into a voltage-control (manual pump) method and a pressure-control (automatic pump) method; the pressure was measured during filtering and the pressure change of each system was compared. The voltage-control method started at 65 kPa at the beginning of filtering, and as the filtering time elapsed and the amount of filtrate accumulated on the filter increased, the pressure gradually rose. When the pressure-control method was regulated to maintain a constant pressure according to the designed algorithm, the width of the hydraulic pressure fluctuation during filtering depended on the feedback time of the pressure sensor, and it was confirmed that the pressure converged to the target pressure. The filtering performance of the developed system was assessed by measuring eDNA concentration and comparing the voltage-control and pressure-control methods against a control group. The voltage-control method gave results similar to the control group, whereas the pressure-control method gave lower results. The low eDNA concentration under pressure control is attributed to the large pressure deviation during filtering and to holding the pressure constant throughout the filtering process. Therefore, rather than maintaining a constant pressure during filtering, a voltage-control method in which the pressure gradually increases as the filtrate accumulates over the filtering time was confirmed to be suitable for collecting eDNA. A comparison of the average eDNA concentration of the control group showed 96.2 ng µL-1 in the lentic zone and 88.4 ng µL-1 in the lotic zone. The average concentrations obtained with the pump method were likewise higher in the lentic-zone sample: 90.7 ng µL-1 in the lentic zone and 74.8 ng µL-1 in the lotic zone. The high eDNA concentration in the lentic zone is thought to be due to the influence of microorganisms, including residual eDNA.
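A minimal sketch of a constant-pressure feedback loop of the kind the automatic pump system appears to use. The abstract does not describe the actual control law, so a simple proportional controller with a fixed sensor-feedback interval is assumed, and the "plant" below is a toy first-order model standing in for the real pump and filter; the setpoint, gain, and interval are illustrative.

```python
# Sketch only: generic proportional pressure control with a toy plant model;
# not the paper's designed algorithm or hardware interface.
TARGET_KPA = 65.0          # assumed setpoint
KP = 0.08                  # proportional gain (illustrative)
FEEDBACK_INTERVAL_S = 0.5  # the abstract notes fluctuation width depends on this interval


def simulate_constant_pressure(duration_s: float = 30.0) -> list[float]:
    voltage, pressure = 5.0, 40.0
    history = []
    steps = int(duration_s / FEEDBACK_INTERVAL_S)
    for _ in range(steps):
        # Toy plant: pressure relaxes toward a value proportional to pump voltage.
        pressure += 0.3 * (8.0 * voltage - pressure)
        error = TARGET_KPA - pressure              # sensor feedback
        voltage = max(0.0, voltage + KP * error)   # raise voltage when below setpoint
        history.append(pressure)
    return history


if __name__ == "__main__":
    trace = simulate_constant_pressure()
    print(f"final pressure = {trace[-1]:.1f} kPa (target {TARGET_KPA} kPa)")
```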

Analysis of Genetics Problem-Solving Processes of High School Students with Different Learning Approaches (학습접근방식에 따른 고등학생들의 유전 문제 해결 과정 분석)

  • Lee, Shinyoung;Byun, Taejin
    • Journal of The Korean Association For Science Education / v.40 no.4 / pp.385-398 / 2020
  • This study examines the genetics problem-solving processes of high school students with different learning approaches. Two second-year high school students participated in a task that required solving a complicated pedigree problem. The participants had similar academic achievement in life science, but one had a deep learning approach while the other had a surface learning approach. To analyze the students' problem-solving processes in depth, each student's problem solving was video-recorded, and each student took part in a think-aloud interview after solving the problem. Although the students showed similar errors on the first trial, they showed different problem-solving processes on the last trial. Student A, who had a deep learning approach, voluntarily solved the problem three times and demonstrated correct conceptual framing of the three constraints using rule-based reasoning in the last trial. Student A monitored the consistency between the data and her own pedigree and reflected on the problem-solving process in the check phase of the last trial. Her problem-solving process in the third trial resembled a successful problem-solving algorithm. However, student B, who had a surface learning approach, involuntarily repeated solving the problem twice and focused on and used only part of the data because of her answer-seeking, goal-oriented attitude toward the problem. Student B showed incorrect conceptual framing based on memory-bank or arbitrary reasoning and maintained her incorrect conceptual framing of the constraints across both problem-solving processes. These findings can help in understanding the problem-solving processes of students with different learning approaches, allowing teachers to better support students who have difficulty approaching genetics problems.

Analysis and Performance Evaluation of Pattern Condensing Techniques used in Representative Pattern Mining (대표 패턴 마이닝에 활용되는 패턴 압축 기법들에 대한 분석 및 성능 평가)

  • Lee, Gang-In;Yun, Un-Il
    • Journal of Internet Computing and Services / v.16 no.2 / pp.77-83 / 2015
  • Frequent pattern mining, one of the major areas actively studied in data mining, is a method for extracting useful pattern information hidden in large data sets or databases. Frequent pattern mining approaches have been actively employed in a variety of application fields because their results allow various important characteristics within databases to be analyzed more easily and automatically. However, traditional frequent pattern mining methods, which simply extract all possible frequent patterns whose support values are not smaller than a user-given minimum support threshold, have the following problems. First, traditional approaches have to generate an enormous number of patterns depending on the features of the given database and the threshold setting, and this number can grow geometrically; such mining also wastes runtime and memory resources. Furthermore, the excessive number of resulting patterns makes analysis of the mining results troublesome. To solve these issues of traditional frequent pattern mining approaches, the concept of representative pattern mining and various related works have been proposed. In contrast to the traditional methods, which find all possible frequent patterns in a database, representative pattern mining approaches selectively extract a smaller number of patterns that represent the general frequent patterns. In this paper, we describe the details and characteristics of pattern condensing techniques that consider the maximality or closure property of generated frequent patterns, and we compare and analyze these techniques. Given a frequent pattern, satisfying maximality means that every possible superset of the pattern has a support value smaller than the user-specified minimum support threshold, while satisfying the closure property means that no superset of the pattern has a support equal to that of the pattern. By mining maximal frequent patterns or closed frequent patterns, we can achieve effective pattern compression and perform mining operations with much smaller time and space resources. In addition, compressed patterns can be converted back into the original frequent pattern forms if necessary; in particular, the closed frequent pattern notation can recover the original patterns without any information loss, so a complete set of original frequent patterns can be obtained from the closed ones. Although the maximal frequent pattern notation does not guarantee complete recovery in the conversion process, it has the advantage of extracting a smaller number of representative patterns more quickly than the closed frequent pattern notation. In this paper, we show the performance and characteristics of these techniques in terms of pattern generation, runtime, and memory usage by conducting a performance evaluation on various real data sets collected from the real world. For an exact comparison, the algorithms implementing these techniques were run on the same platform and at the same implementation level.
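A minimal sketch of the two condensing notions the abstract defines: given the full set of frequent itemsets with their supports, an itemset is closed if no proper superset has the same support, and maximal if no proper superset is frequent at all. The tiny itemset/support table is made up for illustration; real miners compute these properties during search rather than by post-filtering.

```python
# Sketch only: post-hoc check of closedness and maximality on a toy support table,
# illustrating the definitions rather than an efficient mining algorithm.
frequent = {  # itemset -> support (illustrative values)
    frozenset("a"): 5, frozenset("b"): 4, frozenset("c"): 4,
    frozenset("ab"): 4, frozenset("ac"): 3, frozenset("bc"): 3,
    frozenset("abc"): 3,
}

def is_closed(itemset: frozenset, table: dict) -> bool:
    # Closed: no proper superset with equal support.
    return not any(itemset < other and sup == table[itemset]
                   for other, sup in table.items())

def is_maximal(itemset: frozenset, table: dict) -> bool:
    # Maximal: no proper superset is frequent.
    return not any(itemset < other for other in table)

closed = [sorted(s) for s in frequent if is_closed(s, frequent)]
maximal = [sorted(s) for s in frequent if is_maximal(s, frequent)]
print("closed:", closed)    # [['a'], ['c'], ['a', 'b'], ['a', 'b', 'c']]
print("maximal:", maximal)  # [['a', 'b', 'c']]
```

The example also shows why closed patterns are lossless (every frequent itemset's support equals that of its smallest closed superset) while maximal patterns keep only frequency, not supports.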

A Dynamic Prefetch Filtering Schemes to Enhance Usefulness Of Cache Memory (캐시 메모리의 유용성을 높이는 동적 선인출 필터링 기법)

  • Chon Young-Suk;Lee Byung-Kwon;Lee Chun-Hee;Kim Suk-Il;Jeon Joong-Nam
    • The KIPS Transactions:PartA / v.13A no.2 s.99 / pp.123-136 / 2006
  • Prefetching is an effective way to reduce the latency caused by memory accesses. However, excessively aggressive prefetching not only causes cache pollution that cancels out the benefits of prefetching but also increases bus traffic, degrading overall performance. In this thesis, a prefetch filtering scheme is proposed that dynamically decides whether to start a prefetch by consulting a filtering table, thereby reducing the cache pollution caused by unnecessary prefetches. First, the prefetch hashing table 1-bit state filtering scheme (PHT1bSC) is analyzed to expose the lock problem of the conventional scheme: like the conventional scheme it uses N:1 mapping, but each entry holds a two-state 1-bit value. A complete block address table filtering scheme (CBAT) is introduced as a reference for the comparative study. A prefetch block address lookup table scheme (PBALT), the main idea of this paper, is then proposed and exhibits the most exact filtering performance. This scheme has a table of the same length as the PHT1bSC scheme, each entry has the same fields as in the CBAT scheme, and a recently referenced data block address is mapped 1:1 onto an entry of the filter table. Simulations were run with commonly used prefetch schemes on general benchmarks and multimedia programs while varying the cache parameters. Compared with no filtering, the PBALT scheme showed an improvement of up to 22%, and its enhanced filtering accuracy decreased the cache miss ratio by 7.9% compared with the conventional PHT2bSC. The MADT of the proposed PBALT scheme decreased by 6.1% compared with conventional schemes, reducing the total execution time.
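A minimal sketch of the general idea behind table-based prefetch filtering: before issuing a prefetch, the candidate block address is looked up in a small filter table and the prefetch is suppressed if the table predicts it would be useless. The table layout, indexing, and update policy here are generic illustrations, not the exact PBALT design from the thesis.

```python
# Sketch only: a generic address-keyed prefetch filter, assumed for illustration;
# not the thesis's PBALT table organization.
class PrefetchFilter:
    def __init__(self, entries: int = 256):
        self.entries = entries
        self.table = [None] * entries       # stores full block addresses for an exact match

    def _index(self, block_addr: int) -> int:
        return block_addr % self.entries

    def should_prefetch(self, block_addr: int) -> bool:
        # Suppress the prefetch if this exact block was recently marked useless.
        return self.table[self._index(block_addr)] != block_addr

    def mark_useless(self, block_addr: int) -> None:
        # Called when a prefetched block was evicted without ever being referenced.
        self.table[self._index(block_addr)] = block_addr

# Usage: consult the filter on every prefetch candidate.
f = PrefetchFilter()
f.mark_useless(0x1A40)
print(f.should_prefetch(0x1A40), f.should_prefetch(0x2B80))  # False True
```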

Latent topics-based product reputation mining (잠재 토픽 기반의 제품 평판 마이닝)

  • Park, Sang-Min;On, Byung-Won
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.39-70 / 2017
  • Data-driven analytics techniques have recently been applied to public surveys. Instead of simply gathering survey results or expert opinions to research the preference for a recently launched product, enterprises need a way to collect and analyze various types of online data and then accurately figure out customer preferences. In the typical existing data-based survey method, a sentiment lexicon for a particular domain is first constructed by domain experts, who judge the positive, neutral, or negative meanings of the words frequently used in the collected text documents. To research the preference for a particular product, the existing approach (1) collects review posts related to the product from several product review web sites; (2) extracts sentences (or phrases) from the collection after pre-processing steps such as stemming and stop-word removal; (3) classifies the polarity (positive or negative) of each sentence (or phrase) based on the sentiment lexicon; and (4) estimates the positive and negative ratios of the product by dividing the numbers of positive and negative sentences (or phrases) by the total number of sentences (or phrases) in the collection. Furthermore, the existing approach automatically finds important sentences (or phrases) carrying positive and negative meanings about the product. As a motivating example, given a product like the Sonata made by Hyundai Motors, customers often want to see a summary note of the positive points in the 'car design' aspect as well as the negative points in the same aspect. They also want more useful information regarding other aspects such as 'car quality', 'car performance', and 'car service'. Such information will enable customers to make good choices when they attempt to purchase brand-new vehicles. In addition, automobile makers will be able to figure out the preferences and positive/negative points for new models on the market, and the weak points of those models can then be improved based on the sentiment analysis. For this, the existing approach computes the sentiment score of each sentence (or phrase) and then selects the top-k sentences (or phrases) with the highest positive and negative scores. However, the existing approach has several shortcomings and is of limited use in real applications. Its main disadvantages are as follows. (1) The main aspects of a product (e.g., the car design, quality, performance, and service of a Hyundai Sonata) are not considered; with sentiment analysis that ignores aspects, only a summary note containing the overall positive and negative ratios of the product and the top-k sentences (or phrases) with the highest sentiment scores in the entire corpus is reported to customers and car makers. This is not enough, and the main aspects of the target product need to be considered in the sentiment analysis. (2) In general, since the same word has different meanings across different domains, a sentiment lexicon appropriate to each domain needs to be constructed, and an efficient way to build a per-domain lexicon is required because lexicon construction is labor-intensive and time-consuming.
To address these problems, this article proposes a novel product reputation mining algorithm that (1) extracts the topics hidden in review documents written by customers; (2) mines the main aspects based on the extracted topics; (3) measures the positive and negative ratios of the product using these aspects; and (4) presents a digest in which a few important sentences with positive and negative meanings are listed for each aspect. Unlike the existing approach, using hidden topics lets experts construct the sentiment lexicon easily and quickly. Furthermore, by reinforcing topic semantics, the accuracy of product reputation mining can be improved well beyond that of the existing approach. In the experiments, we collected a large set of review documents about domestic vehicles such as the K5, SM5, and Avante; measured the positive and negative ratios of the three cars; produced top-k positive and negative summaries per aspect; and conducted statistical analysis. The experimental results clearly show the effectiveness of the proposed method compared with the existing method.
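A minimal sketch of the aspect-wise positive/negative ratio step. In the paper the aspects come from latent topics mined from the reviews; here the sentence-to-aspect assignment, the toy lexicon, and the example sentences are all illustrative placeholders rather than the authors' data or their exact scoring procedure.

```python
# Sketch only: per-aspect sentiment ratios from a toy lexicon and pre-assigned
# aspects; the paper derives aspects from latent topics, which is not reproduced here.
from collections import defaultdict

lexicon = {"comfortable": +1, "quiet": +1, "sluggish": -1, "noisy": -1}  # toy lexicon

# (aspect, sentence) pairs; in the paper the aspect would come from the topic model.
sentences = [
    ("design", "the interior feels comfortable and quiet"),
    ("performance", "acceleration is sluggish on the highway"),
    ("performance", "the engine is quiet at low speed"),
]

def aspect_ratios(pairs, lex):
    counts = defaultdict(lambda: [0, 0])          # aspect -> [positive, negative]
    for aspect, sent in pairs:
        score = sum(lex.get(w, 0) for w in sent.split())
        if score > 0:
            counts[aspect][0] += 1
        elif score < 0:
            counts[aspect][1] += 1
    return {a: {"pos": p / (p + n), "neg": n / (p + n)}
            for a, (p, n) in counts.items() if p + n}

print(aspect_ratios(sentences, lexicon))
# {'design': {'pos': 1.0, 'neg': 0.0}, 'performance': {'pos': 0.5, 'neg': 0.5}}
```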

Analysis of Significance between SWMM Computer Simulation and Artificial Rainfall on Rainfall Runoff Delay Effects of Vegetation Unit-type LID System (식생유니트형 LID 시스템의 우수유출 지연효과에 대한 SWMM 전산모의와 인공강우 모니터링 간의 유의성 분석)

  • Kim, Tae-Han;Choi, Boo-Hun
    • Journal of the Korean Institute of Landscape Architecture / v.48 no.3 / pp.34-44 / 2020
  • In order to suggest performance analysis directions for the ecological components of a vegetation-based LID system model, this study analyzes the statistical significance between SWMM computer simulations and monitoring results obtained with rainfall and runoff simulation devices, and provides the basic data required for a preliminary system design. The study also comprehensively reviews the vegetation-based LID system's soil, vegetation model, and analysis plans, which were less addressed in previous studies, and suggests a direction for quantifying performance so that the system can substitute for device-type LID systems. After monitoring artificial rainfall for 40 minutes, the test group zone and the control group zone recorded maximum rainfall intensities of 142.91 mm/hr (n=3, sd=0.34) and 142.24 mm/hr (n=3, sd=0.90), respectively. Compared to the hyetograph, lower rainfall intensity was reproduced in the 10-minute and 50-minute sections, and higher rainfall intensity was confirmed in the 20-, 30-, and 40-minute sections. As for rainwater runoff delay effects, the runoff intensity in the test group zone was reduced by 79.8%, recording 0.46 mm/min at the 50-minute point when the runoff intensity was highest in the control group zone. In the computer simulation, the runoff intensity in the test group zone was reduced by 99.1%, recording 0.05 mm/min at the 50-minute point when the runoff intensity was highest. The maximum rainfall runoff intensity in the test group zone (Dv=30.35, NSE=0.36) was 0.77 mm/min in the artificial rainfall monitoring and 1.06 mm/min in the SWMM computer simulation, both at the 70-minute point. Likewise, the control group zone (Dv=17.27, NSE=0.78) recorded 2.26 mm/min and 2.38 mm/min, respectively, at the 50-minute point. By statistically assessing the significance between the rainfall and runoff simulating systems and the SWMM computer simulations, this study was able to suggest a preliminary design direction for the rainwater runoff reduction performance of an LID system planted with single vegetation. Also, by comprehensively examining the LID system's soil and vegetation models and analysis methods, this study was able to compile parameter quantification plans for the vegetation and soil sectors that can be aligned with a preliminary design. However, physical variability was introduced by the use of a single-vegetation LID system, and follow-up studies are required on algorithms for calibrating the statistical significance between monitoring and computer simulation results.
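A minimal sketch of the two goodness-of-fit statistics quoted above (Dv and NSE), using their common hydrological definitions; whether the paper uses exactly these formulas is assumed, and the observed/simulated series below are made-up numbers rather than the study's time series.

```python
# Sketch only: Nash-Sutcliffe efficiency and runoff-volume deviation with their
# standard definitions; illustrative data, not the paper's monitoring or SWMM output.
def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect match, <=0 is no better than the mean."""
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den

def dv_percent(observed, simulated):
    """Deviation of runoff volume, as a percentage of the observed volume."""
    return 100.0 * (sum(simulated) - sum(observed)) / sum(observed)

obs = [0.0, 0.1, 0.4, 0.77, 0.5, 0.2]   # mm/min, illustrative monitoring series
sim = [0.0, 0.05, 0.5, 1.06, 0.6, 0.1]  # mm/min, illustrative SWMM series
print(f"NSE={nse(obs, sim):.2f}, Dv={dv_percent(obs, sim):.1f}%")
```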

Rear Vehicle Detection Method in Harsh Environment Using Improved Image Information (개선된 영상 정보를 이용한 가혹한 환경에서의 후방 차량 감지 방법)

  • Jeong, Jin-Seong;Kim, Hyun-Tae;Jang, Young-Min;Cho, Sang-Bok
    • Journal of the Institute of Electronics and Information Engineers / v.54 no.1 / pp.96-110 / 2017
  • Most existing vehicle detection studies that use a general or wide-angle lens leave a blind spot in rear detection, and the image is vulnerable to noise and various external conditions. In this paper, we propose a method for detection in harsh external environments involving noise, blind spots, and the like. First, using a fish-eye lens minimizes the blind spot compared to a wide-angle lens. Because nonlinear radial distortion increases as the angle of view grows, calibration was applied after initializing and optimizing the distortion constant to ensure accuracy. In addition, along with calibration, the original image was analyzed to remove fog and to correct brightness, enabling detection even when visibility is obstructed by light and dark adaptation in foggy situations or by sudden changes in illumination. Fog removal generally takes a considerable amount of computation time, so the widely used Dark Channel Prior algorithm was adopted to remove the fog while limiting that cost. Gamma correction was used to calibrate brightness; a brightness and contrast evaluation was conducted on the image to determine the gamma value needed for correction, and only part of the image, instead of the whole, was evaluated in order to reduce the computation time. Once the brightness and contrast values were calculated, they were used to decide the gamma value and to correct the entire image. Brightness correction and fog removal were processed in parallel, and the results were merged into a single image to minimize the overall computation time. The HOG feature extraction method was then used to detect the vehicle in the corrected image. As a result, vehicle detection with the proposed image correction took 0.064 seconds per frame and showed a 7.5% improvement in detection rate compared with the existing vehicle detection method.
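A minimal sketch of the gamma-correction step: estimate brightness from a small region of the image and choose a gamma that maps the region's mean toward mid-gray, then apply it to the whole frame via a lookup table. The region-of-interest choice and the gamma-selection rule are illustrative, not the paper's exact evaluation procedure; only numpy is used so the snippet stays self-contained.

```python
# Sketch only: ROI-based gamma estimation and LUT application, assumed for
# illustration; not the paper's brightness/contrast evaluation method.
import numpy as np

def gamma_from_roi(gray: np.ndarray, roi=(slice(0, 64), slice(0, 64))) -> float:
    """Choose gamma so the ROI's mean intensity maps close to 0.5 (mid-gray)."""
    mean = float(np.clip(gray[roi].mean() / 255.0, 1e-3, 1 - 1e-3))
    return np.log(0.5) / np.log(mean)   # mean ** gamma == 0.5

def apply_gamma(gray: np.ndarray, gamma: float) -> np.ndarray:
    lut = (255.0 * (np.arange(256) / 255.0) ** gamma).astype(np.uint8)  # lookup table
    return lut[gray]

# Example on a synthetic dark frame: correction brightens it toward mid-gray.
img = np.full((128, 128), 60, dtype=np.uint8)
g = gamma_from_roi(img)
print(round(g, 2), apply_gamma(img, g).mean())
```

Using a lookup table keeps the per-frame cost low, which matters when brightness correction runs in parallel with fog removal as described above.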

Local Shape Analysis of the Hippocampus using Hierarchical Level-of-Detail Representations (계층적 Level-of-Detail 표현을 이용한 해마의 국부적인 형상 분석)

  • Kim Jeong-Sik;Choi Soo-Mi;Choi Yoo-Ju;Kim Myoung-Hee
    • The KIPS Transactions:PartA / v.11A no.7 s.91 / pp.555-562 / 2004
  • Both global volume reduction and local shape changes of the hippocampus within the brain indicate abnormal neurological states. Hippocampal shape analysis consists of two main steps: first, construct a hippocampal shape representation model; second, compute shape similarity from this representation. This paper proposes a novel method for analyzing hippocampal shape using an integrated octree-based representation containing meshes, voxels, and skeletons. First of all, we create multi-level meshes by applying the Marching Cubes algorithm to the hippocampal region segmented from MR images. This model is converted to an intermediate binary voxel representation, and the 3D skeleton is extracted from these voxels using a slice-based skeletonization method. Then, in order to acquire a multiresolution shape representation, the meshes, voxels, and skeletons are stored hierarchically in the nodes of the octree, and sample meshes are extracted using a ray-tracing-based mesh sampling technique. Finally, as a similarity measure between shapes, we compute the $L_2$ norm and the Hausdorff distance for each sampled mesh pair by shooting rays from the extracted skeleton. Because a mouse-picking interface is used to analyze local shape interactively, we provide an interactive, multiresolution-based analysis of local shape changes. Our experiments show that the approach is robust to rotation and scale, is especially effective at discriminating changes between local shapes of the hippocampus, and moreover increases the speed of analysis without degrading accuracy thanks to the hierarchical level-of-detail approach.
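A minimal sketch of the similarity measures named above, computed between two finite point sets (e.g., mesh sample points): the symmetric Hausdorff distance and an $L_2$-style root-mean-square distance over corresponding samples. The coordinates are made up, and the paper's pairing of samples via rays shot from the skeleton is not reproduced here.

```python
# Sketch only: brute-force Hausdorff and RMS distances between small point sets,
# illustrating the measures rather than the paper's ray-based sampling pipeline.
import math

def euclid(p, q):
    return math.dist(p, q)

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two finite point sets."""
    d_ab = max(min(euclid(p, q) for q in b) for p in a)
    d_ba = max(min(euclid(p, q) for q in a) for p in b)
    return max(d_ab, d_ba)

def l2_mean(pairs):
    """Root-mean-square distance over corresponding sample pairs."""
    return math.sqrt(sum(euclid(p, q) ** 2 for p, q in pairs) / len(pairs))

A = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
B = [(0.1, 0.0, 0.0), (1.0, 0.1, 0.0), (0.0, 1.0, 0.2)]
print(hausdorff(A, B), l2_mean(list(zip(A, B))))
```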