• Title/Summary/Keyword: Curve network


Study on Improvement of Weil Pairing IBE for Secret Document Distribution (기밀문서유통을 위한 Weil Pairing IBE 개선 연구)

  • Choi, Cheong-Hyeon
    • Journal of Internet Computing and Services
    • /
    • v.13 no.2
    • /
    • pp.59-71
    • /
    • 2012
  • PKI-based public key schemes are outstanding in terms of authenticity and privacy. Nevertheless, their application imposes a heavy burden due to certificate/key management, and their high encryption complexity makes them difficult to apply to the limited computing devices of a WSN. Bilinear pairing, which emerged from the original IBE to eliminate the certificate, is a significant future cryptosystem: it is based on the DDH (Decisional Diffie-Hellman) assumption, efficient in terms of computation, and secure enough for authentication. The practical EC Weil pairing offers a simple encryption algorithm and satisfies the IND/NM security constraints against CCA. The Random Oracle Model-based IBE PKG fits, from an operational perspective, the structure of our target system with one secret file server. Our work proposes a modification of the Weil pairing suited to a closed network for secret file distribution [2]. First, we propose an improved scheme that computes both encryption and message/user authentication as fast as the O(DES) level, satisfying privacy, authenticity, and integrity. Second, by using the public key ID as effectively as PKI, our improved IBE variant reduces the key exposure risk.
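The bilinearity that the Weil pairing relies on, and the Boneh-Franklin style IBE encryption that schemes like the one above build on, can be sketched in standard form (a generic sketch of the textbook construction, not the paper's modified scheme):

```latex
e(aP,\, bQ) = e(P,Q)^{ab} \quad \text{for all } a,b \in \mathbb{Z}_q^{*},\; P,Q \in G_1,
\qquad
C = \bigl(rP,\; M \oplus H_2\bigl(e(Q_{\mathrm{ID}}, P_{\mathrm{pub}})^{r}\bigr)\bigr)
```

Here $P_{\mathrm{pub}} = sP$ is the PKG's public key for master secret $s$, $Q_{\mathrm{ID}} = H_1(\mathrm{ID})$ is the identity-derived public key that makes certificates unnecessary, and $r$ is a fresh random exponent per ciphertext $C$ of message $M$.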

Well Log Analysis using Intelligent Reservoir Characterization (지능형 저류층 특성화 기법을 이용한 물리검층 자료 해석)

  • Lim Song-Se
    • Geophysics and Geophysical Exploration
    • /
    • v.7 no.2
    • /
    • pp.109-116
    • /
    • 2004
  • Petroleum reservoir characterization is a process for quantitatively describing various reservoir properties and their spatial variability using all the available field data. Porosity and permeability are the two fundamental reservoir properties, relating to the amount of fluid contained in a reservoir and its ability to flow. These properties have a significant impact on petroleum field operations and reservoir management. In un-cored intervals and wells of heterogeneous formations, estimating porosity and permeability from conventional well logs is a difficult and complex problem for conventional statistical methods. This paper suggests an intelligent technique using fuzzy logic and a neural network to determine reservoir properties from well logs. Fuzzy curve analysis based on fuzzy logic is used to select the well logs best related to core porosity and permeability data. A neural network is used as a nonlinear regression method to develop a transformation between the selected well logs and the core analysis data. The intelligent technique is demonstrated with an application to well data offshore Korea. The results show that this technique produces more accurate and reliable property estimates than previously used methods, and it can be utilized as a powerful tool for reservoir characterization from well logs in oil and natural gas development projects.
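The input-selection step above can be sketched as follows. This is a minimal illustration of fuzzy curve analysis, not the authors' code: it assumes Gaussian membership functions centered at each training sample and ranks a candidate log by the output range of its fuzzy curve (a wider range suggests a stronger relation to the core data).

```python
import math

def fuzzy_curve_range(log_vals, core_vals, b_frac=0.1):
    """Rank an input well log by its fuzzy curve: a Gaussian-weighted average
    of the core measurements, evaluated across the log's value range."""
    b = b_frac * (max(log_vals) - min(log_vals))  # membership bandwidth

    def curve(x):
        w = [math.exp(-(((x - xk) / b) ** 2)) for xk in log_vals]
        return sum(wk * yk for wk, yk in zip(w, core_vals)) / sum(w)

    vals = [curve(x) for x in log_vals]
    return max(vals) - min(vals)  # larger range = more informative log
```

A log strongly related to core porosity yields a fuzzy curve spanning most of the porosity range, while an unrelated log averages out to a nearly flat curve.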

Calibration of Portable Particulate Matter-Monitoring Device using Web Query and Machine Learning

  • Loh, Byoung Gook;Choi, Gi Heung
    • Safety and Health at Work
    • /
    • v.10 no.4
    • /
    • pp.452-460
    • /
    • 2019
  • Background: Monitoring and control of PM2.5 are recognized as key to addressing the health issues attributed to PM2.5. The availability of low-cost PM2.5 sensors has made it possible to introduce a number of portable PM2.5 monitors based on light scattering to the consumer market at an affordable price. The accuracy of light scattering-based PM2.5 monitors depends significantly on the method of calibration. A static calibration curve is the most popular calibration method for low-cost PM2.5 sensors, particularly because of its ease of application; its drawback, however, is a lack of accuracy. Methods: This study discusses the calibration of a low-cost PM2.5-monitoring device (PMD) to improve its accuracy and reliability for practical use. The proposed method is based on constructing a PM2.5 sensor network using the Message Queuing Telemetry Transport (MQTT) protocol and web query of reference measurement data available at a government-authorized PM monitoring station (GAMS) in the Republic of Korea. Four machine learning (ML) algorithms, namely support vector machine, k-nearest neighbors, random forest, and extreme gradient boosting, were used as regression models to calibrate the PMD measurements of PM2.5. The performance of each ML algorithm was evaluated using stratified K-fold cross-validation, with a linear regression model as a reference. Results: Regression of the PMD output to the PM2.5 concentration data available from the GAMS through web query was effective. The extreme gradient boosting algorithm showed the best performance, with a mean coefficient of determination (R2) of 0.78 and a standard error of 5.0 ㎍/㎥, corresponding to an 8% increase in R2 and a 12% decrease in root mean square error compared with the linear regression model. A minimum calibration period of 100 hours was found to be required to calibrate the PMD to its full capacity. The proposed calibration method requires the PMD to be in the vicinity of the GAMS. As the number of PMDs participating in the sensor network increases, however, calibrated PMDs can serve as reference devices for nearby PMDs that require calibration, forming a calibration chain through the MQTT protocol. Conclusions: Calibration of a low-cost PMD, based on a PM2.5 sensor network using the MQTT protocol and web query of reference measurement data available at a GAMS, significantly improves the accuracy and reliability of the PMD, thereby making practical use of low-cost PMDs possible.
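The calibration idea above can be sketched with the simplest of the models the study compares against, the linear regression reference. This is an illustrative least-squares fit of raw PMD readings to reference concentrations, not the paper's ML pipeline:

```python
def fit_calibration(raw, reference):
    """Ordinary least-squares line mapping raw PMD readings to reference
    PM2.5 concentrations (the linear baseline the ML models are compared to)."""
    n = len(raw)
    mx = sum(raw) / n
    my = sum(reference) / n
    sxx = sum((x - mx) ** 2 for x in raw)
    sxy = sum((x - mx) * (y - my) for x, y in zip(raw, reference))
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

def calibrate(x, slope, intercept):
    """Apply the fitted calibration to a new raw reading."""
    return slope * x + intercept
```

In the study, this role is played by the GAMS web-query data as `reference`; the ML regressors (e.g. extreme gradient boosting) replace the linear map when more accuracy is needed.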

A Travel Time Prediction Model under Incidents (돌발상황하의 교통망 통행시간 예측모형)

  • Jang, Won-Jae
    • Journal of Korean Society of Transportation
    • /
    • v.29 no.1
    • /
    • pp.71-79
    • /
    • 2011
  • Traditionally, a dynamic network model is considered a tool for solving real-time traffic problems. One useful and practical way to use such models is to produce and disseminate forecast travel time information so that travelers can switch their routes from congested to less-congested or uncongested ones, which can enhance network performance. This approach seems promising when traffic congestion is severe, especially when sudden incidents happen. One consideration in implementing this method is that travel time information may affect the future traffic condition itself, creating undesirable side effects such as the over-reaction problem; furthermore, an incorrect forecast travel time can make the information unreliable. In this paper, a network-wide travel time prediction model under incidents is developed. The model assumes that all drivers have access to detailed traffic information through personalized in-vehicle devices such as car navigation systems, and that drivers make their own travel choices based on the travel time information provided. A route-based stochastic variational inequality is formulated and used as the basic model for travel time prediction. A diversion function is introduced to account for motorists' willingness to divert, and an inverse of the diversion curve is derived to develop a variational inequality formulation for the travel time prediction model. Computational results illustrate the characteristics of the proposed model.
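A route-based variational inequality of the kind mentioned above typically takes the following standard form (a generic sketch; the paper's exact formulation with the diversion function is not reproduced here): find equilibrium route flows $f^{*}$ in the feasible set $\Omega$ such that

```latex
C(f^{*})^{\top}\,(f - f^{*}) \;\ge\; 0 \qquad \forall\, f \in \Omega
```

where $C(f)$ is the vector of (perceived) route travel times and $\Omega$ is the set of nonnegative route flows satisfying the travel demand. At a solution, no driver can reduce their travel time by unilaterally switching routes.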

Development of spatial dependence formula of FORGEX method using rainfall data in Korea (우리나라 강우 자료를 이용한 FORGEX 기법의 공간상관식 개발)

  • Kim, Sunghun;Ahn, Hyunjun;Shin, Hongjoon;Heo, Jun-Haeng
    • Journal of Korea Water Resources Association
    • /
    • v.49 no.12
    • /
    • pp.1007-1014
    • /
    • 2016
  • The FORGEX (Focused Rainfall Growth Extension) method was developed to estimate rainfall quantiles in the United Kingdom. This method does not need any regional grouping and can estimate rainfall quantiles with relatively long return periods. The spatial dependence formula (ln $N_e$) was derived, using UK rainfall data, to account for the distance from the growth curve of the proper population to the distributed network maximum (netmax) data. For this reason, rainfall quantiles become inaccurate when this formula is applied in Korea. In this study, a new formula was derived to remedy this shortcoming using rainfall data of 64 sites from the Korea Meteorological Administration (KMA). A 42-year period (1973-2014) was taken as the reference period, and the formula was derived using three parameters: rainfall duration, number of sites, and network area. The new formula was then applied to the FORGEX method for regional rainfall frequency analysis, and rainfall quantiles were compared with those from the UK formula. The new formula shows more accurate results than the UK formula, which underestimates rainfall quantiles when used in the FORGEX method. The improved formula may thus estimate accurate rainfall quantiles for long return periods.

Landslide Susceptibility Prediction using Evidential Belief Function, Weight of Evidence and Artificial Neural Network Models (Evidential Belief Function, Weight of Evidence 및 Artificial Neural Network 모델을 이용한 산사태 공간 취약성 예측 연구)

  • Lee, Saro;Oh, Hyun-Joo
    • Korean Journal of Remote Sensing
    • /
    • v.35 no.2
    • /
    • pp.299-316
    • /
    • 2019
  • The purpose of this study was to analyze landslide susceptibility in the Pyeongchang area using Weight of Evidence (WOE) and Evidential Belief Function (EBF) as probability models and Artificial Neural Networks (ANN) as a machine learning model in a geographic information system (GIS). This study examined the widespread shallow landslides triggered by heavy rainfall during Typhoon Ewiniar in 2006, which caused serious property damage and significant loss of life. For the landslide susceptibility mapping, 3,955 landslide occurrences were detected using aerial photographs, and environmental spatial data such as terrain, geology, soil, forest, and land use were collected and constructed in a spatial database. Seventeen factors that could affect landsliding were extracted from the spatial database. All landslides were randomly separated into two datasets, a training set (50%) and validation set (50%), to establish and validate the EBF, WOE, and ANN models. According to the validation results of the area under the curve (AUC) method, the accuracy was 74.73%, 75.03%, and 70.87% for WOE, EBF, and ANN, respectively. The EBF model had the highest accuracy. However, all models had predictive accuracy exceeding 70%, the level that is effective for landslide susceptibility mapping. These models can be applied to predict landslide susceptibility in an area where landslides have not occurred previously based on the relationships between landslide and environmental factors. This susceptibility map can help reduce landslide risk, provide guidance for policy and land use development, and save time and expense for landslide hazard prevention. In the future, more generalized models should be developed by applying landslide susceptibility mapping in various areas.
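The AUC validation used above can be computed directly from susceptibility scores and observed landslide labels. The following is a minimal rank-statistic form of the AUC (the probability that a randomly chosen landslide cell scores higher than a randomly chosen non-landslide cell), not the authors' GIS workflow:

```python
def auc(scores, labels):
    """Area under the ROC curve via the rank statistic: fraction of
    (positive, negative) pairs ranked correctly, ties counted as half."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 means the susceptibility map is no better than chance; the 0.70-0.75 values reported above indicate all three models rank landslide cells above non-landslide cells most of the time.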

Trend Analysis of Research Related to Personality of University Students Through Network Analysis (네트워크 분석을 통한 대학생 인성 관련 연구의 동향 분석)

  • Kim, Sei-Kyung
    • The Journal of the Korea Contents Association
    • /
    • v.21 no.12
    • /
    • pp.47-56
    • /
    • 2021
  • The purpose of this study is to use network analysis to identify trends in studies related to the personality of university students and to provide implications for future research directions. For this purpose, 194 papers related to the personality of university students published in Korean scholarly journals were analyzed. First, such research began to be published in 2004, increased slightly in 2012, continued an upward curve from 2015, peaked in 2017, and has since been on a downward trend. Second, the main keywords identified by centrality analysis were 'society' and 'cultivation'. Third, keywords concerned the cognitive side and individual dimension of personality in the first period (2004-2010), the social dimension and emotional side of personality in the second period (2011-2015), and the social level and the cognitive, emotional, and behavioral aspects of personality in the third period (2016-2020). Fourth, Topic 2 consisted of the keywords ability, life, interpersonal, satisfaction, and adaptation, while Topic 1 consisted of competence, morality, citizens, society, and practice. Fifth, Topic 4 alone dominated the first period, followed by Topic 1 and then Topic 2 in the second period, and Topic 2 and then Topic 1 in the third period.
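The centrality analysis above can be sketched on a keyword co-occurrence network. This is an illustrative example, not the study's data or exact measure: it assumes keywords co-appearing in a paper share an edge and uses normalized degree centrality, one common centrality measure.

```python
from collections import Counter
from itertools import combinations

def degree_centrality(papers):
    """Build a keyword co-occurrence network (one node per keyword, an edge
    between keywords appearing in the same paper) and return each keyword's
    degree divided by (n - 1), the maximum possible degree."""
    edges = set()
    for keywords in papers:
        for a, b in combinations(sorted(set(keywords)), 2):
            edges.add((a, b))
    deg = Counter()
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    n = len({k for keywords in papers for k in keywords})
    return {k: d / (n - 1) for k, d in deg.items()}
```

On hypothetical keyword lists, the keyword that co-occurs with every other keyword, analogous to 'society' in the study, attains the maximum centrality of 1.0.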

Dynamic forecasts of bankruptcy with Recurrent Neural Network model (RNN(Recurrent Neural Network)을 이용한 기업부도예측모형에서 회계정보의 동적 변화 연구)

  • Kwon, Hyukkun;Lee, Dongkyu;Shin, Minsoo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.139-153
    • /
    • 2017
  • Corporate bankruptcy can cause great losses not only to stakeholders but also to many related sectors of society. Through successive economic crises, bankruptcies have increased and bankruptcy prediction models have become more and more important; corporate bankruptcy has therefore been regarded as one of the major topics of research in business management, and many studies are also in progress in industry. Previous studies attempted various methodologies to improve bankruptcy prediction accuracy and to resolve the overfitting problem, such as Multivariate Discriminant Analysis (MDA) and the Generalized Linear Model (GLM), which are based on statistics. More recently, researchers have used machine learning methodologies such as the Support Vector Machine (SVM) and Artificial Neural Network (ANN), and fuzzy theory and genetic algorithms have also been applied. As a result, many bankruptcy models have been developed and their performance has improved. In general, a company's financial and accounting information changes over time, and the market situation changes as well, so predicting bankruptcy from information at a single point in time is difficult. Even though such static research ignores the time effect and thus yields biased results, dynamic models have not been studied much; a static model may therefore not be suitable for predicting bankruptcy, and a dynamic model offers the possibility of improving bankruptcy prediction. In this paper, we propose the RNN (Recurrent Neural Network), a deep learning methodology that learns time-series data and is known to perform well. For the estimation of the bankruptcy prediction model and the comparison of forecasting performance, we selected non-financial firms listed on the KOSPI, KOSDAQ, and KONEX markets from 2010 to 2016. To avoid predicting bankruptcy from financial information that already reflects the deterioration of a company's financial condition, the financial information was collected with a lag of two years, and the default period was defined as January to December of the year. Bankruptcy was defined as delisting due to sluggish earnings, which we confirmed at KIND, a corporate stock information website. We then selected variables from previous papers: the first set consists of Z-score variables, which have become traditional in predicting bankruptcy, and the second is a dynamic variable set. We selected 240 normal companies and 226 bankrupt companies for the first variable set, and 229 normal companies and 226 bankrupt companies for the second. We created a model that reflects dynamic changes in time-series financial data, and by comparing the suggested model with existing bankruptcy prediction models, we found that it could help improve the accuracy of bankruptcy predictions. We used financial data from KIS Value (a financial database) and selected Multivariate Discriminant Analysis (MDA), the Generalized Linear Model known as logistic regression (GLM), the Support Vector Machine (SVM), and the Artificial Neural Network (ANN) as benchmarks. The experiment showed that the RNN outperformed the comparative models: its accuracy was high for both sets of variables, its Area Under the Curve (AUC) value was also high, and in the hit-ratio table the RNN's rate of predicting a failing company as bankrupt was higher than that of the other models. A limitation of this paper is that an overfitting problem occurs during RNN learning, but we expect it can be addressed by selecting more learning data and appropriate variables. From these results, it is expected that this research will contribute to the development of bankruptcy prediction by proposing a new dynamic model.
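The core mechanism above, a recurrent state accumulating a time series of financial indicators into a single prediction, can be sketched with a toy Elman-style RNN. This is a scalar, untrained illustration with made-up weights, not the paper's architecture:

```python
import math

def rnn_predict(sequence, wx=1.0, wh=0.5, b=0.0, wo=2.0):
    """Toy Elman RNN with a scalar hidden state: each period's financial
    ratio updates the state, and a logistic readout turns the final state
    into a bankruptcy probability. Weights are illustrative, not trained."""
    h = 0.0
    for x in sequence:
        h = math.tanh(wx * x + wh * h + b)   # recurrent state update
    return 1.0 / (1.0 + math.exp(-wo * h))   # sigmoid readout
```

Because the hidden state carries information forward, a sequence of deteriorating (negative) ratios drives the output toward 1 side of the decision boundary differently than a healthy sequence, which is exactly the time effect static models discard.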

Development of a Network Loading Model for Dynamic Traffic Assignment (동적 통행배정모형을 위한 교통류 부하모형의 개발)

  • 임강원
    • Journal of Korean Society of Transportation
    • /
    • v.20 no.3
    • /
    • pp.149-158
    • /
    • 2002
  • For the purpose of precisely describing real-time traffic patterns in an urban road network, dynamic network loading (DNL) models able to simulate traffic behavior are required. A number of different methods are available, including macroscopic and microscopic dynamic network models as well as analytical models. The equivalent minimization problem and the variational inequality problem are the analytical models, which include an explicit mathematical travel cost function for describing traffic behavior on the network, while microscopic simulation models move vehicles according to behavioral car-following and cell-transmission rules. However, DNL models embedding such travel time functions have some limitations: analytical models lack the ability to describe traffic characteristics such as the relations between flow and speed and between speed and density, while microscopic simulation models, although the most detailed and realistic, are difficult to calibrate and may not be the most practical tools for large-scale networks. To cope with these problems, this paper develops a new DNL model appropriate for dynamic traffic assignment (DTA). The model is combined with a vertical queue model representing vehicles as vertical queues at the ends of links. To compare and assess the model, we use a contrived example network. From the numerical results, we found that the DNL model presented in this paper describes traffic characteristics within a reasonable amount of computing time. The model also showed a good relationship between travel time and traffic flow and expressed the backward-bending feature near capacity.
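The vertical queue idea above can be sketched as follows. This is a minimal point-queue link model under simplifying assumptions (discrete time steps, constant capacity), not the paper's full DNL model: vehicles traverse the link in free-flow time, then wait in a vertical queue at the link exit, which discharges at capacity.

```python
def point_queue(demand, capacity, fftt):
    """Vertical (point) queue link model: inflow reaches the exit queue after
    the free-flow travel time (fftt time steps); the queue is then discharged
    at the link capacity each step. Returns per-step outflow and the residual
    queue at the end of the horizon."""
    arrivals = [0.0] * fftt + list(demand)  # shift inflow by free-flow time
    queue, outflow = 0.0, []
    for a in arrivals:
        queue += a                       # vehicles joining the exit queue
        served = min(queue, capacity)    # discharge limited by capacity
        outflow.append(served)
        queue -= served
    return outflow, queue
```

Link travel time in such a model is the free-flow time plus the queueing delay, which grows with the residual queue, reproducing the travel time/flow relationship the abstract describes.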

Comparative Study of Machine Learning Techniques for Spammer Detection in Social Bookmarking Systems (소셜 북마킹 시스템의 스패머 탐지를 위한 기계학습 기술의 성능 비교)

  • Kim, Chan-Ju;Hwang, Kyu-Baek
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.15 no.5
    • /
    • pp.345-349
    • /
    • 2009
  • Social bookmarking systems are a typical Web 2.0 service based on folksonomy, providing a platform for storing and sharing bookmarking information. Spammers in social bookmarking systems are users who abuse the system for their own interests in an improper way; they can render the entire resources of a social bookmarking system useless by posting lots of wrong information. Hence, it is important to detect spammers as early as possible and protect social bookmarking systems from their attacks. In this paper, we applied a diverse set of machine learning approaches to this task: decision tables, decision trees (ID3), naïve Bayes classifiers, TAN (tree-augmented naïve Bayes) classifiers, and artificial neural networks. In our experiments, naïve Bayes classifiers performed significantly better than the other methods with respect to both the AUC (area under the ROC curve) score and the model building time. Plausible explanations for this result are as follows. First, naïve Bayes classifiers are known to usually perform better than decision trees in terms of the AUC score. Second, the spammer detection problem in our experiments is likely to be linearly separable.
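The winning classifier above can be sketched for binary spam indicators. This is an illustrative Bernoulli naïve Bayes with Laplace smoothing, not the authors' experimental setup, which assumes hypothetical 0/1 features (e.g. "posted off-topic tags"):

```python
import math

def train_bernoulli_nb(X, y):
    """Bernoulli naive Bayes: per class, a log-prior and Laplace-smoothed
    probabilities that each binary feature equals 1."""
    model = {}
    for c in sorted(set(y)):
        rows = [x for x, yi in zip(X, y) if yi == c]
        log_prior = math.log(len(rows) / len(X))
        probs = [(sum(r[j] for r in rows) + 1) / (len(rows) + 2)
                 for j in range(len(X[0]))]
        model[c] = (log_prior, probs)
    return model

def predict(model, x):
    """Pick the class maximizing log-prior plus per-feature log-likelihood."""
    best, best_score = None, float("-inf")
    for c, (log_prior, probs) in model.items():
        score = log_prior + sum(math.log(p if xi else 1.0 - p)
                                for xi, p in zip(x, probs))
        if score > best_score:
            best, best_score = c, score
    return best
```

The per-feature independence assumption is what makes training and prediction fast, consistent with the abstract's observation that naïve Bayes had the shortest model building time.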