• Title/Summary/Keyword: 매칭기법 (matching technique)

Search results: 762

Freeway Crash Frequency Model Development Based on the Road Section Segmentation by Using Vehicle Speeds (차량 속도를 이용한 도로 구간분할에 따른 고속도로 사고빈도 모형 개발 연구)

  • Hwang, Gyeong-Seong;Choe, Jae-Seong;Kim, Sang-Yeop;Heo, Tae-Yeong;Jo, Won-Beom;Kim, Yong-Seok
    • Journal of Korean Society of Transportation / v.28 no.2 / pp.151-159 / 2010
  • This paper presents the results of research performed to develop a more accurate freeway crash prediction model than existing models. While existing crash models focus only on relationships between crashes and the highway geometric conditions found on the short section surrounding a crash site, this research takes a different approach that also considers upstream highway geometric conditions. This is a reasonable approach because crashes occur while motorists are in motion, and on freeways in particular the vehicle speed at a given point is very sensitive to upstream geometric conditions. To form the analysis database, this research gathered the geometric conditions of 269.3 km of the West Seaside Freeway and six years of crash data (2003-2008) for these freeway sections. As a result, it was found that the crashes fit a Negative Binomial distribution well and, based on the developed model, the total number of crashes is inversely proportional to highway curve length and radius. Conversely, crash occurrences are proportional to tangent length. This result differs from existing crash study results and appears to stem from this research's assumption that a crash is strongly influenced by upstream geometric conditions. The research also quantifies the expected effects on crash occurrence of the length of downgrade sections, speed camera placements, and the presence of on- and off-ramps. The results should be useful for more reasonable highway design and safety audit analysis, and applying the same approach to national roads and other major urban roads is recommended.
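
As an editorial illustration of the modeling step described above, the sketch below fits a negative binomial crash-frequency regression with statsmodels; the section data, covariate names, and dispersion value are hypothetical stand-ins, not the paper's actual variables.

```python
# A minimal sketch (not the paper's model) of a negative binomial
# crash-frequency regression on hypothetical section-level covariates.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200  # hypothetical number of speed-based road sections
sections = pd.DataFrame({
    "curve_length_km": rng.uniform(0.1, 2.0, n),
    "curve_radius_km": rng.uniform(0.3, 3.0, n),
    "tangent_length_km": rng.uniform(0.2, 5.0, n),
    "upstream_downgrade": rng.integers(0, 2, n),  # upstream geometry indicator
    "crashes": rng.poisson(3, n),                 # placeholder crash counts
})

# Negative binomial GLM: E[crashes] = exp(b0 + b1*x1 + ...)
model = smf.glm(
    "crashes ~ curve_length_km + curve_radius_km + tangent_length_km + upstream_downgrade",
    data=sections,
    family=sm.families.NegativeBinomial(alpha=1.0),  # alpha is an assumed dispersion
).fit()
print(model.summary())
```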

Track Models Generation Based on Spatial Image Contents for Railway Route Management (철도노선관리에서의 공간 영상콘텐츠 기반의 궤적 모델 생성)

  • Yeon, Sang-Ho;Lee, Young-Wook
    • Proceedings of the KSR Conference / 2008.11b / pp.30-36 / 2008
  • Spatial image contents of the 3-D geomorphological environment are drawing attention because of their requirement and importance in fields such as national land development planning, telecommunication facility management, railway construction, general construction engineering, ubiquitous city development, and safety and disaster prevention engineering. The currently used DEM system based on contour lines, which embodies geographic and facility information on 2-D digital maps, has limitations in reproducing a 3-D spatial city. Moreover, this method often neglects the altitude of railway infrastructure, which is narrow in width and long in length. Therefore, a laser measurement technique needs to be applied to the spatial target object to obtain sufficient accuracy. Recently, LiDAR data, which combines laser measurement with GPS, has been introduced to obtain high-resolution accuracy in altitude measurement. In this paper, we surveyed railway facilities with a laser surveying system and propose the generation of spatial images for the optimal management and synthesis of the railway facility system within 3-D spatial terrain information. For this purpose, LiDAR-based height data were transformed into a DEM, and vector data obtained through digital image mapping and raster data validated through accuracy evaluation were unified in real time, making it possible to trace the generated three-dimensional railway model over long distances for 3D track model generation. As a result, we confirmed a variety of application solutions for railway facility management using 3-D spatial image contents.
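
The DEM-generation step can be illustrated with a small rasterization sketch; the point cloud, column layout, and cell size below are hypothetical and only show the general idea of turning LiDAR heights into a grid, not the authors' processing chain.

```python
# Illustrative sketch, assuming LiDAR returns as (x, y, z) points: grid the
# heights into a simple DEM raster by averaging the points in each cell.
import numpy as np

def points_to_dem(points: np.ndarray, cell_size: float = 1.0) -> np.ndarray:
    """Rasterize (x, y, z) LiDAR points into a mean-height DEM grid."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    col = ((x - x.min()) / cell_size).astype(int)
    row = ((y - y.min()) / cell_size).astype(int)
    dem_sum = np.zeros((row.max() + 1, col.max() + 1))
    dem_cnt = np.zeros_like(dem_sum)
    np.add.at(dem_sum, (row, col), z)   # accumulate heights per cell
    np.add.at(dem_cnt, (row, col), 1)   # count returns per cell
    with np.errstate(divide="ignore", invalid="ignore"):
        return dem_sum / dem_cnt        # NaN where a cell has no returns

# Example: a small synthetic point cloud along a track corridor
pts = np.column_stack([np.random.rand(1000) * 50,
                       np.random.rand(1000) * 10,
                       20 + np.random.rand(1000)])
dem = points_to_dem(pts, cell_size=1.0)
print(dem.shape)
```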


Overseas Address Data Quality Verification Technique using Artificial Intelligence Reflecting the Characteristics of Administrative System (국가별 행정체계 특성을 반영한 인공지능 활용 해외 주소데이터 품질검증 기법)

  • Jin-Sil Kim;Kyung-Hee Lee;Wan-Sup Cho
    • The Journal of Bigdata / v.7 no.2 / pp.1-9 / 2022
  • In the global era, the importance of imported food safety management is increasing. Address information of overseas food companies is key information for imported food safety management and must be verified for prompt response and follow-up management in the event of a food risk. However, because each country's address system is different, one verification system cannot verify the addresses of all countries. Also, the purpose of address verification may differ depending on the field in which it is used. In this paper, we deal with the problem of classifying a given overseas food business address into the administrative district level of its country. This is because, in the event of harm from imported food, it is necessary to find the administrative district level from the address of the relevant company and, based on this, trace the food distribution route or take measures to ban imports. However, in some countries the administrative district level name is omitted from the address, and the same place name is used repeatedly at several administrative district levels, so it is not easy to accurately classify the administrative district level from the address. In this study we propose a deep learning-based administrative district level classification model suited to this case and verify it on actual address data of overseas food companies. Specifically, a method of training a multi-label classification model using a label powerset is used. To verify the proposed method, accuracy was measured for the addresses of overseas manufacturing companies in Ecuador and Vietnam registered with the Ministry of Food and Drug Safety, and accuracy improved by 28.1% and 13%, respectively, compared to the existing classification model.
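
The label powerset idea mentioned above can be shown with a minimal sketch: every observed combination of district labels is collapsed into one class for an ordinary classifier. The addresses, labels, and the TF-IDF/logistic-regression classifier are illustrative assumptions; the paper itself uses a deep learning model.

```python
# Minimal label-powerset illustration (not the paper's model): each observed
# combination of administrative-district labels becomes one class.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

addresses = [
    "Av. Amazonas 123, Quito, Pichincha",
    "Calle 10 y Malecon, Guayaquil, Guayas",
    "Km 5 Via Daule, Guayaquil, Guayas",
]
# Multi-label targets (e.g., province / canton); joined tuples act as powerset classes.
label_sets = [
    ("Pichincha", "Quito"),
    ("Guayas", "Guayaquil"),
    ("Guayas", "Guayaquil"),
]
powerset_classes = ["|".join(labels) for labels in label_sets]

clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # character n-grams
    LogisticRegression(max_iter=1000),
)
clf.fit(addresses, powerset_classes)
pred = clf.predict(["Av. 9 de Octubre, Guayaquil"])[0]
print(pred.split("|"))  # decode back into the multi-label set
```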

Anomaly Detection for User Action with Generative Adversarial Networks (적대적 생성 모델을 활용한 사용자 행위 이상 탐지 방법)

  • Choi, Nam woong;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.43-62 / 2019
  • Anomaly detection was once dominated by methods that decided whether an abnormality existed based on statistics derived from the data. This methodology worked because the dimensionality of the data was simple in the past, so classical statistical methods could be effective. However, as the characteristics of data have become more complex in the era of big data, it has become difficult to accurately analyze and predict the data generated across industry in the conventional way. Supervised learning algorithms based on SVM and Decision Trees were therefore used. However, supervised learning models predict test data accurately only when the class distribution is balanced, whereas most data generated in industry has imbalanced classes, so the predictions are not always valid when a supervised learning model is applied. To overcome these drawbacks, many studies now use unsupervised learning-based models that are not influenced by the class distribution, such as autoencoders or generative adversarial networks. In this paper, we propose a method to detect anomalies using generative adversarial networks. AnoGAN, introduced by Schlegl et al. (2017), is a model composed of convolutional neural networks that performs anomaly detection on medical images. In contrast, research on anomaly detection for sequence data using generative adversarial networks is scarce compared to research on image data. Li et al. (2018) proposed a model based on LSTM, a type of recurrent neural network, to classify abnormalities in numerical sequence data, but it was not applied to categorical sequence data, nor did it use the feature matching method of Salimans et al. (2016). This suggests that there is considerable room for research on anomaly classification of sequence data with generative adversarial networks. To learn sequence data, the generative adversarial network is built from LSTMs: the generator is a 2-stacked LSTM with 32-dimensional and 64-dimensional hidden unit layers, and the discriminator uses an LSTM with a 64-dimensional hidden unit layer. Existing work on anomaly detection for sequence data derives anomaly scores from the entropy of the probabilities of the actual data, but in this paper, as mentioned earlier, anomaly scores are derived using the feature matching technique. In addition, the process of optimizing the latent variables was designed with an LSTM to improve model performance. The modified generative adversarial model was more accurate than the autoencoder in terms of precision in all experiments and was approximately 7% higher in accuracy. In terms of robustness, the generative adversarial network also performed better than the autoencoder: because generative adversarial networks learn the data distribution from real categorical sequence data, they are not swayed by a single normal pattern, whereas the autoencoder is. The robustness test showed that the accuracy of the autoencoder was 92% and that of the adversarial network was 96%; in terms of sensitivity, the autoencoder achieved 40% and the adversarial network 51%. Experiments were also conducted to show how much performance changes with differences in the latent-variable optimization structure; as a result, sensitivity improved by about 1%. These results offer a new perspective on optimizing the latent variables, which had previously received relatively little attention.
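
The feature-matching anomaly score on LSTM-based GAN components can be sketched as follows in PyTorch; the hidden dimensions follow the abstract, but the architecture details, training loop, and latent-variable optimization are omitted, so this is an illustration of the idea rather than the authors' implementation.

```python
# Compact sketch: LSTM generator/discriminator and a discriminator
# feature-matching distance used as an anomaly score (illustrative only).
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, latent_dim=16, out_dim=8):
        super().__init__()
        self.lstm1 = nn.LSTM(latent_dim, 32, batch_first=True)  # 32-dim hidden layer
        self.lstm2 = nn.LSTM(32, 64, batch_first=True)           # 64-dim hidden layer
        self.proj = nn.Linear(64, out_dim)

    def forward(self, z):                      # z: (batch, seq_len, latent_dim)
        h, _ = self.lstm1(z)
        h, _ = self.lstm2(h)
        return self.proj(h)                    # generated sequence

class Discriminator(nn.Module):
    def __init__(self, in_dim=8):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, 64, batch_first=True)        # 64-dim hidden layer
        self.out = nn.Linear(64, 1)

    def features(self, x):                     # intermediate features for matching
        h, _ = self.lstm(x)
        return h[:, -1, :]                     # last-step 64-dim feature

    def forward(self, x):
        return torch.sigmoid(self.out(self.features(x)))

def feature_matching_score(D, x_real, x_gen):
    """Anomaly score: distance between discriminator features of the observed
    sequence and of the generator's output for its (optimized) latent code."""
    return torch.norm(D.features(x_real) - D.features(x_gen), dim=1)

G, D = Generator(), Discriminator()
x = torch.randn(4, 20, 8)                      # 4 sequences of length 20
z = torch.randn(4, 20, 16)                     # latent codes (optimized in the paper)
score = feature_matching_score(D, x, G(z))
print(score.shape)                             # one anomaly score per sequence
```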

A Method of Reproducing the CCT of Natural Light using the Minimum Spectral Power Distribution for each Light Source of LED Lighting (LED 조명의 광원별 최소 분광분포를 사용하여 자연광 색온도를 재현하는 방법)

  • Yang-Soo Kim;Seung-Taek Oh;Jae-Hyun Lim
    • Journal of Internet Computing and Services / v.24 no.2 / pp.19-26 / 2023
  • Humans have adapted and evolved to natural light. However, as humans stay indoors longer in modern times, disturbances of the biorhythm have been induced. To solve this problem, research is being conducted on lighting that reproduces the correlated color temperature (CCT) of natural light, which varies from sunrise to sunset. To reproduce the CCT of natural light, a luminaire is built from multiple LED light sources with different CCTs, a control index DB is constructed by measuring and collecting the light characteristics of input-current combinations for each light source over hundreds to thousands of steps, and the lighting is then controlled through a light-characteristic matching method. The problem with this control method is that the finer the steps of the input-current combinations, the more time and economic cost is incurred. In this paper, an LED lighting control method that applies interpolation and combination calculations based on the minimum spectral power distribution (SPD) information of each light source is proposed to reproduce the CCT of natural light. First, minimum SPD information was measured and collected for each of the five channels of the LED lighting, which consists of light source channels with different CCTs and implements a 256-step input-current control function for each channel. Interpolation was performed to generate the SPDs of all 256 steps of each channel from the minimum SPD information, and the SPDs for all control combinations of the LED lighting were generated through combination calculations of the per-channel SPDs. Illuminance and CCT were calculated from the generated SPDs, a control index DB was constructed, and the CCT of natural light was reproduced through a matching technique. In the performance evaluation, the CCT of natural light was reproduced within an average error rate of 0.18% while meeting the recommended indoor illuminance standard.
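
A rough numpy sketch of the interpolation and combination calculations is given below; the wavelength grid, channel names, number of coarse measurements per channel, and values are assumptions, and the CIE illuminance/CCT computation used to build the control index DB is only indicated in a comment.

```python
# Illustrative sketch of (1) interpolating each channel's coarse SPD
# measurements up to 256 control steps and (2) summing per-channel SPDs for a
# control combination. All values and the measurement scheme are hypothetical.
import numpy as np

wavelengths = np.arange(380, 781, 5)                 # nm, hypothetical grid
n_steps = 256                                        # input-current steps per channel

rng = np.random.default_rng(1)
measured_steps = np.array([0, 64, 128, 192, 255])    # assumed coarse measurement points
channels = {
    "warm_3000K": rng.random((5, wavelengths.size)) * measured_steps[:, None] / 255,
    "cool_6500K": rng.random((5, wavelengths.size)) * measured_steps[:, None] / 255,
}

def interpolate_channel(spds_at_steps: np.ndarray) -> np.ndarray:
    """Expand coarse per-step SPD measurements to all 256 control steps."""
    full = np.empty((n_steps, spds_at_steps.shape[1]))
    for k in range(spds_at_steps.shape[1]):           # interpolate per wavelength
        full[:, k] = np.interp(np.arange(n_steps), measured_steps, spds_at_steps[:, k])
    return full

full_spds = {name: interpolate_channel(spd) for name, spd in channels.items()}

def combined_spd(steps: dict) -> np.ndarray:
    """SPD of a control combination = sum of each channel's SPD at its step."""
    return sum(full_spds[name][step] for name, step in steps.items())

spd = combined_spd({"warm_3000K": 120, "cool_6500K": 200})
# From this SPD, illuminance and CCT would be computed with the CIE color
# matching functions (omitted here) to build the control-index DB that is
# searched when matching the target natural-light CCT.
print(spd.shape)
```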

A Wideband LNA and High-Q Bandpass Filter for Subsampling Direct Conversion Receivers (서브샘플링 직접변환 수신기용 광대역 증폭기 및 High-Q 대역통과 필터)

  • Park, Jeong-Min;Yun, Ji-Sook;Seo, Mi-Kyung;Han, Jung-Won;Choi, Boo-Young;Park, Sung-Min
    • Journal of the Institute of Electronics Engineers of Korea SD / v.45 no.11 / pp.89-94 / 2008
  • In this paper, a cascade of a wideband amplifier and a high-Q bandpass filter (BPF) has been realized in a 0.18-µm CMOS technology for subsampling direct-conversion receiver applications. The wideband amplifier is designed for a -3 dB bandwidth of 5.4 GHz, and the high-Q BPF is designed to select a 2.4 GHz RF signal for the Bluetooth specification. The measured results demonstrate 18.8 dB power gain at 2.34 GHz with 31 MHz bandwidth, corresponding to a quality factor of 75. They also show a noise figure (NF) of 8.6 dB and broadband input matching (S11) of less than -12 dB within the bandwidth. The whole chip dissipates 64.8 mW from a single 1.8 V supply and occupies an area of 1.0 × 1.0 mm².
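
For reference, the reported quality factor follows directly from Q = f_center / bandwidth, as the quick check below shows.

```python
# Quality factor implied by the measured numbers: Q = center frequency / -3 dB bandwidth.
f_center = 2.34e9      # Hz
bandwidth = 31e6       # Hz
q_factor = f_center / bandwidth
print(round(q_factor, 1))   # ~75.5, consistent with the reported Q of 75
```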

Alternative Tracing Method for Moving Object Using Reference Template in Real-time Image - Focusing on Parking Management System (참조 템플릿 기반 실시간 이동체 영상을 이용한 대안적 탐지 방안 - 주차관리시스템을 대상으로)

  • Joo, Yong Jin;Kang, Lee Seul;Hahm, Chang Hahk
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.32 no.5 / pp.495-503 / 2014
  • As the number of vehicles has sharply increased, the significance of safety and efficient operation of parking lots, which form part of the transportation system, is being emphasized. Recently, there have been several studies on parking management by detecting moving objects; however, recognizing a number of fast-moving vehicles simultaneously in an image is still a challenging problem. Parking lots in public areas or large buildings have clearly marked parking sections, and the sensor system is configured to monitor a plurality of parking spaces. Therefore, considering such parking lots, we developed a real-time parking availability information system by applying real-time image processing techniques with the help of template matching. Through reference template markers, the proposed method recognizes parked vehicles by their size and shape, providing an alternative for parking management that does not require directly detecting driving movements. In addition, we evaluated the applicability and performance of the presented information system and implemented a prototype that simulates the parking status of each floor. In fact, it was possible to manage and analyze statistics on the total number of parking spaces and the number of parked vehicles through real-time video frames. We expect the results of this study to be advanced further, improving user-friendliness and reducing the cost of operating parking management systems while providing information through efficient analysis of parking situations.
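
The template matching core can be illustrated with OpenCV as below; the image files, region-of-interest rectangles, and threshold are hypothetical, and the real system's reference-template construction is more elaborate.

```python
# Minimal OpenCV sketch of the reference-template idea: compare a stored
# empty-space template against each parking-space region of a camera frame.
import cv2
import numpy as np

# Hypothetical placeholder files for the camera frame and the reference template.
frame = cv2.imread("parking_frame.jpg", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("space_template.jpg", cv2.IMREAD_GRAYSCALE)

def space_is_occupied(frame, template, roi, threshold=0.7):
    """Return True if the empty-space template does NOT match the space ROI
    well, i.e. a parked vehicle changes the region's appearance."""
    x, y, w, h = roi
    region = frame[y:y + h, x:x + w]
    result = cv2.matchTemplate(region, template, cv2.TM_CCOEFF_NORMED)
    return result.max() < threshold

rois = [(40, 60, 80, 160), (130, 60, 80, 160)]     # hypothetical space rectangles
statuses = [space_is_occupied(frame, template, r) for r in rois]
print(sum(statuses), "of", len(rois), "spaces occupied")
```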

Development of Information Technology Infrastructures through Construction of Big Data Platform for Road Driving Environment Analysis (도로 주행환경 분석을 위한 빅데이터 플랫폼 구축 정보기술 인프라 개발)

  • Jung, In-taek;Chong, Kyu-soo
    • Journal of the Korea Academia-Industrial cooperation Society / v.19 no.3 / pp.669-678 / 2018
  • This study developed information technology infrastructure for building a driving environment analysis platform using various kinds of big data, such as vehicle sensing data and public data. First, on the hardware side, a small platform server with a parallel structure for distributed big data processing was developed. Next, on the software side, programs for big data collection/storage, processing/analysis, and information visualization were developed. The collection software was developed as a collection interface using Kafka, Flume, and Sqoop. The storage software was divided into the Hadoop distributed file system and a Cassandra DB according to how the data are used. The processing software performs spatial-unit matching and time-interval interpolation/aggregation of the collected data by applying a grid index method. The analysis software was developed as an analytical tool based on the Zeppelin notebook for applying and evaluating the developed algorithms. Finally, the information visualization software was developed as a Web GIS engine program for providing and visualizing various driving environment information. As a result of the performance evaluation, the number of executors, the optimal memory capacity, and the number of cores for the developed server were derived, and its computation performance was superior to that of other cloud computing environments.
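
The grid index spatial matching mentioned for the processing software can be sketched as follows; the cell size, coordinates, and aggregation are illustrative, not the platform's actual parameters.

```python
# Illustrative sketch of grid-index spatial matching: vehicle sensing records
# are snapped to fixed-size grid cells so they can be joined with road-link
# data aggregated on the same grid.
import pandas as pd

CELL = 0.001   # grid cell size in degrees (hypothetical)

def grid_key(lon: float, lat: float) -> tuple:
    """Map a coordinate to its grid-cell index."""
    return (int(lon // CELL), int(lat // CELL))

sensing = pd.DataFrame({
    "lon": [127.0241, 127.0243, 127.0319],
    "lat": [37.5662, 37.5664, 37.5701],
    "speed_kph": [42.0, 38.5, 55.2],
})
sensing["cell"] = [grid_key(o, a) for o, a in zip(sensing.lon, sensing.lat)]

# Aggregate per grid cell (the time-interval aggregation is omitted for brevity):
by_cell = sensing.groupby("cell")["speed_kph"].mean().reset_index()
print(by_cell)
```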

Elicitation of Collective Intelligence by Fuzzy Relational Methodology (퍼지관계 이론에 의한 집단지성의 도출)

  • Joo, Young-Do
    • Journal of Intelligence and Information Systems / v.17 no.1 / pp.17-35 / 2011
  • Collective intelligence is a commons-based production created by the collaboration and competition of many peer individuals; in other words, it is the aggregation of individual intelligence that yields the wisdom of crowds. Recently, the utilization of collective intelligence has become one of the emerging research areas, since it has been adopted as an important principle of Web 2.0, which aims at openness, sharing, and participation. This paper introduces an approach to derive collective intelligence by recognizing the relations and interactions among individual participants. It describes a methodology well suited to evaluating individual intelligence in information retrieval and classification as an application field. The research investigates how to derive and represent such cognitive intelligence from individuals through the application of fuzzy relational theory to personal construct theory and the knowledge grid technique. Crucial to this research is formally implementing and interpretatively processing the cognitive knowledge of participants who form mutual relations and social interactions. What is needed is a technique to analyze the cognitive intelligence structure in the form of a Hasse diagram, which is an instantiation of this perceptive intelligence of human beings. The search for collective intelligence requires a theory of similarity to deal with the underlying problems: clustering social subgroups of individuals through identification of individual intelligence and commonality among intelligences, and then eliciting collective intelligence by aggregating what is congruent or shared among all participants of the entire group. Unlike standard approaches to similarity based on statistical techniques, the presented method employs the theory of fuzzy relational products with the related computational procedures to cover issues of similarity and dissimilarity.
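
One way to make the fuzzy relational product concrete is the Bandler-Kohout style subproduct sketched below; the Łukasiewicz implication and the participant-by-construct ratings are illustrative choices, not necessarily the ones used in the paper.

```python
# Small numpy sketch of a fuzzy relational (sub)product used for similarity
# analysis. R is a fuzzy relation between participants and constructs, e.g.
# knowledge-grid ratings scaled to [0, 1]; values are hypothetical.
import numpy as np

def lukasiewicz_implication(a, b):
    """Lukasiewicz fuzzy implication I(a, b) = min(1, 1 - a + b)."""
    return np.minimum(1.0, 1.0 - a + b)

def bk_subproduct(R, S):
    """(R <| S)[i, k] = min_j I(R[i, j], S[j, k]): the degree to which
    everything related to i in R is also related to k in S."""
    n, m = R.shape[0], S.shape[1]
    out = np.empty((n, m))
    for i in range(n):
        for k in range(m):
            out[i, k] = lukasiewicz_implication(R[i, :], S[:, k]).min()
    return out

R = np.array([[0.9, 0.2, 0.7],
              [0.8, 0.1, 0.6],
              [0.2, 0.9, 0.3]])
similarity = bk_subproduct(R, R.T)   # participant-to-participant inclusion degrees
print(np.round(similarity, 2))
```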

Spherical Panorama Image Generation Method using Homography and Tracking Algorithm (호모그래피와 추적 알고리즘을 이용한 구면 파노라마 영상 생성 방법)

  • Munkhjargal, Anar;Choi, Hyung-Il
    • The Journal of the Korea Contents Association / v.17 no.3 / pp.42-52 / 2017
  • A panorama image is a single image obtained by combining images taken at several viewpoints through the matching of corresponding points. Existing panorama generation methods find corresponding points by extracting locally invariant feature points in each image, creating descriptors, and applying a descriptor matching algorithm. In the case of a video sequence there may be many frames, so generating a panoramic image with the existing methods can take a significant amount of time and involve unnecessary calculations. In this paper, we propose a method to quickly create a single panoramic image from a video sequence. Assuming that there are no significant local changes between frames of the video, we use the FAST algorithm, which has good repeatability and fast computation, to extract feature points, and the Lucas-Kanade algorithm to track each feature point and find corresponding points in its surrounding neighborhood, instead of the existing descriptor matching algorithms. Once homographies are calculated for all images, they are re-referenced around the center image of the video sequence to warp the images and obtain a planar panoramic image. Finally, the spherical panoramic image is obtained by performing an inverse transformation of the spherical coordinate system. Experiments confirmed that the proposed method generates panorama images efficiently and faster than existing methods.
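
The FAST + Lucas-Kanade + homography pipeline can be condensed into the OpenCV sketch below; file names and parameter values are hypothetical, and the spherical mapping step is only indicated in a comment.

```python
# Condensed sketch of the described pipeline: FAST corners in one frame,
# Lucas-Kanade tracking into the next, then a RANSAC homography.
import cv2
import numpy as np

prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)   # hypothetical frames
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# 1. FAST feature points in the previous frame (fast, repeatable, no descriptors)
fast = cv2.FastFeatureDetector_create(threshold=25)
kps = fast.detect(prev, None)
pts_prev = np.float32([kp.pt for kp in kps]).reshape(-1, 1, 2)

# 2. Lucas-Kanade optical flow tracks each point into the current frame
pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, pts_prev, None)
good_prev = pts_prev[status.ravel() == 1]
good_curr = pts_curr[status.ravel() == 1]

# 3. Homography between the tracked correspondences (RANSAC rejects outliers)
H, _ = cv2.findHomography(good_curr, good_prev, cv2.RANSAC, 5.0)

# Warping curr into prev's plane; chaining such homographies toward the center
# frame yields the planar panorama, which is then mapped to spherical coordinates.
h, w = prev.shape
warped = cv2.warpPerspective(curr, H, (w * 2, h))
print(H)
```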