• Title/Summary/Keyword: Four-network model


Calibration of Portable Particulate Matter-Monitoring Device using Web Query and Machine Learning

  • Loh, Byoung Gook;Choi, Gi Heung
    • Safety and Health at Work
    • /
    • v.10 no.4
    • /
    • pp.452-460
    • /
    • 2019
  • Background: Monitoring and control of PM2.5 are recognized as key to addressing health issues attributed to PM2.5. The availability of low-cost PM2.5 sensors has made it possible to introduce a number of portable PM2.5 monitors based on light scattering to the consumer market at an affordable price. The accuracy of light scattering-based PM2.5 monitors depends significantly on the method of calibration. A static calibration curve is the most popular calibration method for low-cost PM2.5 sensors, particularly because of its ease of application; its drawback, however, is a lack of accuracy. Methods: This study discusses the calibration of a low-cost PM2.5-monitoring device (PMD) to improve its accuracy and reliability for practical use. The proposed method is based on construction of a PM2.5 sensor network using the Message Queuing Telemetry Transport (MQTT) protocol and web query of reference measurement data available at a government-authorized PM monitoring station (GAMS) in the Republic of Korea. Four machine learning (ML) algorithms, namely support vector machine, k-nearest neighbors, random forest, and extreme gradient boosting, were used as regression models to calibrate the PMD measurements of PM2.5. The performance of each ML algorithm was evaluated using stratified K-fold cross-validation, with a linear regression model used as a reference. Results: Based on the performance of the ML algorithms, regression of the PMD output to PM2.5 concentration data available from the GAMS through web query was effective. The extreme gradient boosting algorithm showed the best performance, with a mean coefficient of determination (R2) of 0.78 and a standard error of 5.0 ㎍/㎥, corresponding to an 8% increase in R2 and a 12% decrease in root mean square error compared with the linear regression model. A minimum calibration period of 100 hours was found to be required to calibrate the PMD to its full capacity.
The proposed calibration method poses a limitation in that the PMD must be located in the vicinity of the GAMS. As the number of PMDs participating in the sensor network increases, however, calibrated PMDs can serve as reference devices for nearby PMDs that require calibration, forming a calibration chain through the MQTT protocol. Conclusions: Calibration of a low-cost PMD based on construction of a PM2.5 sensor network using the MQTT protocol and web query of reference measurement data available at a GAMS significantly improves the accuracy and reliability of the PMD, thereby making practical use of low-cost PMDs possible.
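The regression-based calibration described above can be illustrated with a minimal, self-contained sketch. This is not the authors' code: it substitutes ordinary least squares for the paper's ML regressors, uses plain (unstratified) K-fold, and generates synthetic PMD/GAMS data, but it shows how per-fold RMSE and R2 are obtained.

```python
import math
import random

def fit_linear(xs, ys):
    # ordinary least squares: y ~ a*x + b (stand-in for the paper's ML regressors)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def kfold_scores(xs, ys, k=5, seed=0):
    # plain K-fold cross-validation; returns mean RMSE and mean R^2 over folds
    idx = list(range(len(xs)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    rmses, r2s = [], []
    for fold in folds:
        held = set(fold)
        tr = [i for i in idx if i not in held]
        a, b = fit_linear([xs[i] for i in tr], [ys[i] for i in tr])
        pred = [a * xs[i] + b for i in fold]
        true = [ys[i] for i in fold]
        sse = sum((p - t) ** 2 for p, t in zip(pred, true))
        mean_t = sum(true) / len(true)
        sst = sum((t - mean_t) ** 2 for t in true)
        rmses.append(math.sqrt(sse / len(true)))
        r2s.append(1 - sse / sst)
    return sum(rmses) / k, sum(r2s) / k

# synthetic example: raw PMD readings vs. hypothetical GAMS reference values
rng = random.Random(1)
raw = [rng.uniform(5, 80) for _ in range(200)]
ref = [0.8 * x + 3 + rng.gauss(0, 1.0) for x in raw]
rmse, r2 = kfold_scores(raw, ref, k=5)
```

In the paper's setting the reference values would arrive via web query from the GAMS rather than being simulated.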

DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference
    • /
    • 1995.02a
    • /
    • pp.101-113
    • /
    • 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination (OD) surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification, and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS is not able to project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, including two stateline areas, one county, and five cities, are analyzed and the zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are applied to the Gravity Model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types: Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed.
The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. Three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration data base. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. However, for this research, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs. The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM.
The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted by using both 16 selected links and 32 selected links. The result of SELINK analysis using 32 selected links provides the least %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. But more importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted by using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used for evaluation of the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC ratios of 0.958 by using 32 selected links and 1.001 by using 16 selected links are obtained.
The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of %RMSE for the four screenlines resulting from the fourth and last GM run by using 32 and 16 selected links is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves of 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible by using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not provide any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the least value while that for the STH shows the largest value. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups. Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume.
The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production trip rate (total adjusted productions/total population) and a new trip attraction trip rate. Revised zonal production and attraction adjustment factors can then be developed that only reflect the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8 while the productions for relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable.
The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT that is forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH. This implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%. Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed, are useful information for highway planners to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted by using the GM truck forecasting model. Four scenarios are used. For better forecasting, ground count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results using the ground count-based segment adjustment factors are satisfactory for long range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes.
The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.
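The gravity model underlying the methodology above distributes each zone's trip productions across attraction zones in proportion to attractions weighted by calibrated friction factors. A minimal sketch of that distribution step (the zone data below are hypothetical, not from the study):

```python
def gravity_trips(productions, attractions, friction):
    # singly constrained gravity model: T_ij = P_i * A_j * F_ij / sum_k(A_k * F_ik)
    trips = []
    for i, p in enumerate(productions):
        weights = [attractions[j] * friction[i][j] for j in range(len(attractions))]
        denom = sum(weights)
        trips.append([p * w / denom for w in weights])
    return trips

# hypothetical 2-origin, 3-destination example
P = [100.0, 200.0]               # zonal trip productions
A = [50.0, 150.0, 100.0]         # zonal trip attractions
F = [[1.0, 0.5, 0.2],            # friction factors from the calibrated curves
     [0.3, 1.0, 0.8]]
T = gravity_trips(P, A, F)
```

By construction each row of the trip table sums to that zone's productions, which is the property the SELINK adjustment factors then perturb.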


Exploring Sweepstakes Marketing Strategies in Facebook Brand Fan Pages (페이스북 브랜드 팬 페이지의 경품 이벤트 마케팅 전략에 관한 탐색적 연구)

  • Choi, Yoon-Jin;Jeon, Byeong-Jin;Kim, Hee-Woong
    • The Journal of Information Systems
    • /
    • v.26 no.2
    • /
    • pp.1-23
    • /
    • 2017
  • Purpose Facebook is a social network service that has the highest number of Monthly Active Users around the world. Hence, marketers have selected Facebook as the most important platform for customer engagement. With respect to customer engagement enhancement, the most popular and engaging post type in Facebook brand fan pages is what is usually classified as 'sweepstakes'. Sweepstakes refer to a form of gambling in which the entire prize may be awarded to the winner, which makes customers more engaged with the brand. This study aims to explore sweepstakes-oriented social media marketing approaches based on the application of big data analytics. Design/methodology/approach We collect sweepstakes data from each company by crawling the Facebook brand fan pages. The output of this study explains how companies in each category of the FCB grid can design and apply sweepstakes for their social media marketing. Findings The results show that the four quadrants of the FCB grid have one thing in common. Regardless of the quadrant, the most frequently observed type is 'Simple/Quiz or Comments/Quatrains [event type of sweepstakes] + Gifticon [type of reward prize] + Image [type of message display] + No URL [link to other website] + Single-Gift-Offer [type of reward prize payment]'. So, if the position of a brand is hard to define with the FCB grid model, this general rule can be applied to all types of brands. Some differences between the quadrants of the FCB grid were also observed. This study offers several research implications by analyzing sweepstakes-oriented social media marketing approaches in Facebook brand fan pages. By using the FCB grid model, this study provides guidance on how companies can design their sweepstakes-oriented social media marketing approaches in the context of Facebook brand fan pages by considering their context.

Development of Image Defect Detection Model Using Machine Learning (기계 학습을 활용한 이미지 결함 검출 모델 개발)

  • Lee, Nam-Yeong;Cho, Hyug-Hyun;Ceong, Hyi-Thaek
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.15 no.3
    • /
    • pp.513-520
    • /
    • 2020
  • Recently, the development of vision inspection systems using machine learning has become more active. This study seeks to develop a defect inspection model using machine learning. Defect detection for images corresponds to a classification problem, a method of supervised learning in machine learning. In this study, defect detection models are developed based on algorithms that automatically extract features and algorithms that do not. One-dimensional CNN and two-dimensional CNN are used as algorithms with automatic feature extraction, and MLP and SVM are used as algorithms without feature extraction. Defect detection models are developed based on these four models, and their performance is compared based on accuracy and AUC. Although image classification with CNN is common in model development, high accuracy and AUC are achieved in this study when developing the SVM model by converting pixels from images into RGB values.
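The pixel-to-RGB-feature approach described above amounts to flattening pixel values into feature vectors and fitting a classifier on them. A minimal sketch with synthetic "images", substituting a nearest-centroid classifier for the paper's SVM so the example stays self-contained:

```python
from collections import defaultdict

def flatten_rgb(image):
    # image: grid of (R, G, B) tuples -> flat numeric feature vector
    return [float(c) for row in image for px in row for c in px]

def nearest_centroid(train_X, train_y, x):
    # nearest class-centroid classifier (stand-in for the paper's SVM)
    groups = defaultdict(list)
    for xi, yi in zip(train_X, train_y):
        groups[yi].append(xi)
    centroids = {y: [sum(col) / len(vs) for col in zip(*vs)]
                 for y, vs in groups.items()}
    sq = lambda a, b: sum((p - q) ** 2 for p, q in zip(a, b))
    return min(centroids, key=lambda y: sq(centroids[y], x))

# hypothetical 2x2 "images": a uniform bright surface vs. one with a dark defect pixel
good = [[(200, 200, 200), (200, 200, 200)], [(200, 200, 200), (200, 200, 200)]]
bad = [[(200, 200, 200), (10, 10, 10)], [(200, 200, 200), (200, 200, 200)]]
X = [flatten_rgb(good), flatten_rgb(bad)]
y = ["ok", "defect"]
query = [[(190, 190, 190), (0, 0, 0)], [(190, 190, 190), (190, 190, 190)]]
label = nearest_centroid(X, y, flatten_rgb(query))
```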

Model Verification Algorithm for ATM Security System (ATM 보안 시스템을 위한 모델 인증 알고리즘)

  • Jeong, Heon;Lim, Chun-Hwan;Pyeon, Suk-Bum
    • Journal of the Institute of Electronics Engineers of Korea TE
    • /
    • v.37 no.3
    • /
    • pp.72-78
    • /
    • 2000
  • In this study, we propose a model verification algorithm based on DCT and a neural network for an ATM security system. We construct a database of facial images after capturing thirty persons' facial images under the same illumination and distance. To simulate model verification, we capture four learning images and test images per person. After detecting edges in the facial images, we detect a characteristic area of square shape using the edge distribution in the facial images. The characteristic area contains the eyebrows, eyes, nose, mouth, and cheeks. We extract characteristic vectors by calculating the sums of diagonal coefficients after obtaining the DCT coefficients of the characteristic area. The characteristic vectors are normalized between +1 and -1 and then used as input vectors for the neural network. Without considering passwords, simulation results showed a 100% verification rate when facial images were learned and a 92% verification rate when facial images weren't learned. Considering passwords, however, the proposed algorithm showed a 100% verification rate in both simulations.
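The feature-extraction pipeline described above (2-D DCT of the characteristic area, diagonal coefficient sums, normalization into [-1, +1]) can be sketched as follows; the 4x4 patch is a hypothetical stand-in for a detected facial region, and the DCT is left unnormalized for brevity:

```python
import math

def dct2(block):
    # unnormalized 2-D DCT-II of a rectangular pixel block
    n, m = len(block), len(block[0])
    return [[sum(block[x][y]
                 * math.cos(math.pi * (2 * x + 1) * u / (2 * n))
                 * math.cos(math.pi * (2 * y + 1) * v / (2 * m))
                 for x in range(n) for y in range(m))
             for v in range(m)] for u in range(n)]

def diagonal_sums(coeffs):
    # sum coefficients along each anti-diagonal u + v = d
    n, m = len(coeffs), len(coeffs[0])
    sums = [0.0] * (n + m - 1)
    for u in range(n):
        for v in range(m):
            sums[u + v] += coeffs[u][v]
    return sums

def normalize(vec):
    # scale the characteristic vector into [-1, +1] for the neural network input
    peak = max(abs(x) for x in vec) or 1.0
    return [x / peak for x in vec]

# hypothetical 4x4 grayscale patch standing in for the detected facial area
patch = [[10, 20, 30, 40], [20, 30, 40, 50], [30, 40, 50, 60], [40, 50, 60, 70]]
feature = normalize(diagonal_sums(dct2(patch)))
```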


A Study on the Blockchain-Based Insurance Fraud Prediction Model Using Machine Learning (기계학습을 이용한 블록체인 기반의 보험사기 예측 모델 연구)

  • Lee, YongJoo
    • Journal of Convergence for Information Technology
    • /
    • v.11 no.6
    • /
    • pp.270-281
    • /
    • 2021
  • With the development of information technology, the size of insurance fraud is increasing rapidly every year, and its methods are becoming more organized and sophisticated, often involving conspiracy. Although various forms of prediction models are being studied to predict and detect fraud, insurance-related information is highly sensitive, which poses a high risk in sharing and access and entails many legal and technical constraints. In this paper, we propose a machine learning insurance fraud prediction model based on blockchain, one of the most popular technologies to emerge with the recent advent of the Fourth Industrial Revolution. We utilize blockchain technology to realize a safe and trusted insurance information sharing system, apply the theory of social relationship analysis for more efficient and accurate fraud prediction, and propose machine learning fraud prediction patterns in four stages. Claims with a high probability of fraud are detected at a higher prediction rate at an earlier stage, and claims with a low probability are handled differentially for post-reference management. The core mechanism of the proposed model has been verified by constructing an Ethereum local network; more sophisticated performance evaluations are required in the future.

Representative Batch Normalization for Scene Text Recognition

  • Sun, Yajie;Cao, Xiaoling;Sun, Yingying
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.7
    • /
    • pp.2390-2406
    • /
    • 2022
  • Scene text recognition has important application value and has attracted the interest of many researchers. At present, many methods have achieved good results, but most existing approaches attempt to improve the performance of scene text recognition at the image level. They have a good effect on reading regular scene texts. However, there are still many obstacles to recognizing text in low-quality images, such as curved, occluded, and blurred text. This exacerbates the difficulty of feature extraction because the image quality is uneven. In addition, model testing results are highly dependent on training data, so there is still room for improvement in scene text recognition methods. In this work, we present a natural scene text recognizer that improves recognition performance at the feature level, comprising feature representation and feature enhancement. In terms of feature representation, we propose an efficient feature extractor combining Representative Batch Normalization and ResNet. It reduces the dependence of the model on training data and improves the feature representation ability across different instances. In terms of feature enhancement, we use a feature enhancement network to expand the receptive field of feature maps, so that the feature maps contain rich feature information. Enhanced feature representation capability helps to improve the recognition performance of the model. We conducted experiments on 7 benchmarks, which show that this method is highly competitive in recognizing both regular and irregular texts. The method achieved top-1 recognition accuracy on four benchmarks: IC03, IC13, IC15, and SVTP.
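Representative Batch Normalization is a variant of standard batch normalization; for orientation only, the plain batch-norm operation it builds on can be sketched for a single feature channel (the channel values below are hypothetical, and this is not the paper's variant):

```python
import math

def batch_norm(values, gamma=1.0, beta=0.0, eps=1e-5):
    # normalize one feature channel over the batch, then scale and shift
    n = len(values)
    mean = sum(values) / n
    var = sum((x - mean) ** 2 for x in values) / n
    return [gamma * (x - mean) / math.sqrt(var + eps) + beta for x in values]

channel = [1.0, 2.0, 3.0, 4.0]   # hypothetical activations for one channel
normed = batch_norm(channel)
```

The output has (approximately) zero mean and unit variance before the learned scale `gamma` and shift `beta` are applied.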

Analyzing the Performance of the South Korean Men's National Football Team Using Social Network Analysis: Focusing on the Manager Bento's Matches (사회연결망분석을 활용한 한국 남자축구대표팀 경기성과 분석: 벤투 감독 경기를 중심으로)

  • Yeonsik Jung;Eunkyung Kang;Sung-Byung Yang
    • Knowledge Management Research
    • /
    • v.24 no.2
    • /
    • pp.241-262
    • /
    • 2023
  • The phenomena and game records that occur in sports matches are being analyzed in the field of sports game analysis, utilizing advanced technologies and various scientific analysis methods. Among these methods, social network analysis is actively employed in analyzing pass networks. As football is a representative sport in which the game unfolds through player interactions, efforts are being made to use social network analysis to provide new insights into the game that were previously unattainable. Consequently, this study aims to analyze the changes in pass networks over time for a specific football team and compare them in different scenarios, including variations in the game's nature (Qatar World Cup games vs. A match games) and alterations in the opposing team (higher FIFA rankers vs. lower FIFA rankers). To elaborate, we selected ten matches from the games of the Korean national football team following Coach Bento's appointment, extracted network indicators for these matches, and applied four indicators (efficiency, cohesion, vulnerability, and activity/leadership) from a football team's performance evaluation model to the extracted data for analysis under different circumstances. The research findings revealed a significant increase in cohesion and a substantial decrease in vulnerability in the analysis of game performance over time. In the comparative analysis based on changes in the game's nature, Qatar World Cup matches exhibited superior performance across all aspects of the evaluation model compared to A matches. Lastly, in the comparative analysis considering variations in the opposing team, matches against lower FIFA rankers displayed superior performance in all aspects of the evaluation model in comparison to matches against top FIFA rankers.
We hope that the outcomes of this study can serve as essential foundational data for the selection of football team coaches and the development of game strategies, thereby contributing to the enhancement of the team's performance.
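The pass-network indicators above come from social network analysis. As a rough illustration (not the study's evaluation model), two basic indicators can be computed from a hypothetical pass list: directed network density, often read as a cohesion-type measure, and per-player out-degree as a simple activity measure:

```python
def pass_density(passes, players):
    # share of possible ordered player pairs connected by at least one pass
    pairs = {(a, b) for a, b in passes if a != b}
    n = len(players)
    return len(pairs) / (n * (n - 1))

def out_degrees(passes, players):
    # number of passes made by each player (a simple activity indicator)
    deg = {p: 0 for p in players}
    for a, _ in passes:
        deg[a] += 1
    return deg

# hypothetical four-player pass list
players = ["GK", "DF", "MF", "FW"]
passes = [("GK", "DF"), ("DF", "MF"), ("MF", "FW"), ("MF", "DF"), ("DF", "MF")]
d = pass_density(passes, players)
deg = out_degrees(passes, players)
```

The study's actual indicators (efficiency, vulnerability, activity/leadership) are composites built on top of network statistics of this kind.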

A Study on Geoid Model Development Method in Philippines (필리핀 지오이드모델의 개발방안 연구)

  • Lee, Suk-Bae;Pena, Bonifasio Dela
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.27 no.6
    • /
    • pp.699-710
    • /
    • 2009
  • If a country has its own geoid model, accurate orthometric heights can be determined because the geoid model provides a continuous equi-gravity potential surface. It is also possible to improve the coordinate accuracy of national control points through geodetic network adjustment considering geoidal heights. This study aims to find the best way to develop a geoid model for the Philippines, which has topographic conditions similar to Malaysia and Indonesia in Southeast Asia. In this study, the general theories of geoid determination and development cases of geoid models in Asia are surveyed, and geoidal heights and gravity anomalies are computed by spherical harmonic analysis using EGM2008, the latest Earth geopotential model. The results show that, first, development of a gravimetric geoid model based on airborne gravimetry is needed and, second, about 200 GPS surveying measurements at national benchmarks are needed. It is concluded that the most reasonable way is to develop a hybrid geoid model by fitting a geometric geoid derived from GPS/leveling data to the gravimetric geoid. It is also proposed to use the four-band spherical Fast Fourier Transform (FFT) method for evaluation of the Stokes integral, the remove-and-restore technique using EGM2008 and SRTM for calculation of the gravimetric geoid model, and a least squares collocation algorithm for calculation of the hybrid geoid model.

Link Budget and Performance Analysis of UWB Transmission Method for Off-body HDR Communication in WBAN System (WBAN에서 신체 외 고속통신을 위한 UWB 전송 방식의 링크버짓 및 성능 분석)

  • Choi, Nack-Hyun;Hwang, Jae-Ho;Jang, Sung-Jeen;Kim, Jae-Moung
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.8 no.1
    • /
    • pp.53-64
    • /
    • 2009
  • For the realization of the ubiquitous society, applying IT to the vehicle industry has recently become an attractive way to make wireless communication in body area networks possible everywhere. In this paper, we propose a physical layer symbol structure based on the PPM scheme of IEEE 802.15.4a for the off-body high-data-rate WBAN system. We propose four symbol structures, which are classified according to the number of chips and whether channel coding is used. We calculate the required SNR through a link budget calculation, and the recently proposed off-body WBAN channel environment is applied in the simulation. The results for the four systems show that a smaller number of chips per burst enhances performance and that the system is capable of achieving a data rate of 10 Mbps.
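A link budget of the kind mentioned above relates transmit power, antenna gains, and path loss to the margin over receiver sensitivity. A minimal sketch with an assumed log-distance path-loss model and made-up parameter values (not the paper's WBAN channel model):

```python
import math

def path_loss_db(d_m, d0_m=1.0, pl0_db=40.0, n=2.0):
    # log-distance path loss: PL(d) = PL(d0) + 10*n*log10(d/d0); values are assumptions
    return pl0_db + 10.0 * n * math.log10(d_m / d0_m)

def link_margin_db(pt_dbm, gt_db, gr_db, d_m, sensitivity_dbm):
    # received power minus receiver sensitivity; a positive margin means the link closes
    pr_dbm = pt_dbm + gt_db + gr_db - path_loss_db(d_m)
    return pr_dbm - sensitivity_dbm

# hypothetical off-body link at two distances
margin_2m = link_margin_db(0.0, 0.0, 0.0, 2.0, -90.0)
margin_4m = link_margin_db(0.0, 0.0, 0.0, 4.0, -90.0)
```

In an actual design the required SNR for the chosen PPM symbol structure would be subtracted from this margin as well.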
