• Title/Summary/Keyword: Random Error


Mapping Urban Inundation Using Flood Depth Extraction from Flood Map Image (침수지도 영상의 침수심 추출기법을 활용한 내수 침수 위험지도 작성)

  • Na, Seo Hyeon;Lee, Su Won;Kim, Joo Won;Byeon, Seong Joon
    • Journal of Korean Society of Water Science and Technology
    • /
    • v.26 no.6
    • /
    • pp.133-142
    • /
    • 2018
  • Increasingly frequent localized torrential rainfall caused by abnormal climate is inflicting greater damage on people and property through urban inundation, so the need for preventive measures is being highlighted. In this study, a methodology is suggested for calculating flood depth in an urban inundation map using an interpolation method, in order to utilize flood-analysis results that are provided only in the form of a report. With the S area of Incheon Metropolitan City as the test bed, the flood depth was calculated by interpolating the actual flood-analysis image, and verification was performed. Verification showed an error rate of 5.2% for the maximum flooding depth, and the depth values compared at 10 random points differed by less than 0.030 m. Also, since the flood-analysis results were presented in various ways, the flood depth was extracted from images of the results under each presentation method and then compared and analyzed. The results of this study could serve as basic data for research on urban inundation and for policy decision-making.
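
The depth-interpolation step can be sketched minimally. The inverse-distance-weighting rule and all coordinates and depths below are illustrative assumptions, not the paper's actual grid or interpolation method:

```python
# Hypothetical sketch: recover flood depth at an arbitrary point by
# interpolating depths sampled from pixels of a flood-map image.

def idw_depth(samples, x, y, power=2.0):
    """Inverse-distance-weighted depth at (x, y) from (xi, yi, depth) samples."""
    num = den = 0.0
    for xi, yi, d in samples:
        dist2 = (x - xi) ** 2 + (y - yi) ** 2
        if dist2 == 0.0:
            return d  # exact hit on a sampled pixel
        w = 1.0 / dist2 ** (power / 2.0)
        num += w * d
        den += w
    return num / den

# depths (m) sampled at pixel centers of a hypothetical flood-map raster
samples = [(0, 0, 0.10), (10, 0, 0.30), (0, 10, 0.20), (10, 10, 0.50)]
print(round(idw_depth(samples, 5, 5), 3))  # equidistant -> mean depth 0.275
```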

On algorithm for finding primitive polynomials over GF(q) (GF(q)상의 원시다항식 생성에 관한 연구)

  • 최희봉;원동호
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.11 no.1
    • /
    • pp.35-42
    • /
    • 2001
  • Primitive polynomials over GF(q) are used in scramblers, error-correcting codes and decoders, random generators, ciphers, etc. An algorithm that efficiently generates primitive polynomials over GF(q) was proposed by A.D. Porto. The algorithm generates a sequence of primitive polynomials by repeatedly finding another primitive polynomial from a known one. In this paper, we propose an improvement of the A.D. Porto algorithm. The running time of the A.D. Porto algorithm is O($\textrm{km}^2$), while the running time of the improved algorithm is O(m(m+k)). Here, k is gcd(k, $q^m$-1). When finding a primitive polynomial of degree m, it is efficient to use the improved algorithm under the condition k, m >> 1.
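
This is not the Porto algorithm from the abstract, only a brute-force illustration of what "primitive" means: a polynomial f(x) of degree m over GF(2) is primitive iff x has multiplicative order 2^m - 1 in GF(2)[x]/f(x). Polynomials are bit-packed integers here:

```python
def polymulmod(a, b, f, m):
    """Multiply GF(2) polynomials a and b modulo f (degree m)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> m) & 1:
            a ^= f          # reduce as soon as the degree reaches m
    return r

def order_of_x(f, m):
    """Smallest e > 0 with x^e = 1 modulo f."""
    acc = 0b10              # the polynomial "x"
    for e in range(1, 2 ** m):
        if acc == 1:
            return e
        acc = polymulmod(acc, 0b10, f, m)
    return 2 ** m           # not reached for irreducible f

def is_primitive(f, m):
    return order_of_x(f, m) == 2 ** m - 1

print(is_primitive(0b10011, 4))   # x^4 + x + 1 is primitive
print(is_primitive(0b11111, 4))   # x^4+x^3+x^2+x+1 is irreducible but x has order 5
```

The efficient algorithms discussed in the paper avoid exactly this exhaustive order computation.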

A Secure RFID Multi-Tag Search Protocol Without On-line Server (서버가 없는 환경에서 안전한 RFID 다중 태그 검색 프로토콜)

  • Lee, Jae-Dong
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.22 no.3
    • /
    • pp.405-415
    • /
    • 2012
  • In many applications a reader needs to determine, without a server, whether a particular tag exists within a group of tags. This is referred to as serverless RFID tag searching. A few protocols for serverless RFID searching have been proposed, but they are single-tag search protocols, which can search for only one tag at a time. In this paper, we propose a multi-tag search protocol, based on a hash function and a random number generator, which can search for several tags at once. For this study, we introduce a protocol which resolves the problem of seed synchronization when a communication error occurs in the S3PR protocol[1], and propose a multi-tag search protocol which reduces the communication overhead. The proposed protocol is secure against tracking, impersonation, replay, and denial-of-service attacks. This study will be the basis of further research on multi-tag search protocols.
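
The flavor of hash-based serverless tag search can be sketched as follows. This is an illustrative toy, not the paper's S3PR-based protocol: the reader broadcasts a fresh nonce, each tag answers with a hash bound to both nonces (so replies cannot be replayed or linked across sessions), and the reader checks the replies against the secrets of the tags it is searching for:

```python
import hashlib

def h(*parts):
    return hashlib.sha256(b"|".join(parts)).hexdigest()

def tag_response(tag_secret, reader_nonce, tag_nonce):
    # each tag binds its answer to both fresh nonces
    return h(tag_secret, reader_nonce, tag_nonce)

def search(reader_nonce, wanted_secrets, responses):
    """Indices of responses that match any searched tag's secret."""
    found = []
    for i, (tag_nonce, resp) in enumerate(responses):
        if any(resp == h(s, reader_nonce, tag_nonce) for s in wanted_secrets):
            found.append(i)
    return found

# reader broadcasts a nonce; three tags answer; reader searches for secB, secC
rn = b"fresh-reader-nonce"
replies = [(b"n1", tag_response(b"secA", rn, b"n1")),
           (b"n2", tag_response(b"secB", rn, b"n2")),
           (b"n3", tag_response(b"secC", rn, b"n3"))]
print(search(rn, [b"secB", b"secC"], replies))   # -> [1, 2]
```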

ESTIMATION OF NITROGEN-TO-IRON ABUNDANCE RATIOS FROM LOW-RESOLUTION SPECTRA

  • Kim, Changmin;Lee, Young Sun;Beers, Timothy C.;Masseron, Thomas
    • Journal of The Korean Astronomical Society
    • /
    • v.55 no.2
    • /
    • pp.23-36
    • /
    • 2022
  • We present a method to determine nitrogen abundance ratios with respect to iron ([N/Fe]) from molecular CN-band features observed in low-resolution (R ~ 2000) stellar spectra obtained by the Sloan Digital Sky Survey (SDSS) and the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST). Various tests are carried out to check the systematic and random errors of our technique, and the impact of signal-to-noise (S/N) ratios of stellar spectra on the determined [N/Fe]. We find that the uncertainty of our derived [N/Fe] is less than 0.3 dex for S/N ratios larger than 10 in the ranges Teff = [4000, 6000] K, log g = [0.0, 3.5], [Fe/H] = [-3.0, 0.0], [C/Fe] = [-1.0, +4.5], and [N/Fe] = [-1.0, +4.5], the parameter space that we are interested in to identify N-enhanced stars in the Galactic halo. A star-by-star comparison with a sample of stars with [N/Fe] estimates available from the Apache Point Observatory Galactic Evolution Experiment (APOGEE) also suggests a similar level of uncertainty in our measured [N/Fe], after removing its systematic error. Based on these results, we conclude that our method is able to reproduce [N/Fe] from low-resolution spectroscopic data, with an uncertainty sufficiently small to discover N-rich stars that presumably originated from disrupted Galactic globular clusters.

Factor augmentation for cryptocurrency return forecasting (암호화폐 수익률 예측력 향상을 위한 요인 강화)

  • Yeom, Yebin;Han, Yoojin;Lee, Jaehyun;Park, Seryeong;Lee, Jungwoo;Baek, Changryong
    • The Korean Journal of Applied Statistics
    • /
    • v.35 no.2
    • /
    • pp.189-201
    • /
    • 2022
  • In this study, we propose factor augmentation to improve the forecasting power for cryptocurrency returns. We consider financial and economic variables as well as psychological aspects as possible factors. To be more specific, financial and economic factors are obtained by applying principal factor analysis, while the psychological factor is summarized by news sentiment analysis. We also visualize these factors through impulse response analysis. On the modeling side, we consider ARIMAX as the classical model, and random forest and deep learning models to accommodate nonlinear features. As a result, we show that factor augmentation reduces prediction error and that the GRU performed best among all models considered.
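
The core idea of factor augmentation can be sketched with a synthetic example. The data-generating process, the single principal factor, and the AR(1) baseline below are assumptions for illustration, not the paper's pipeline:

```python
# Sketch: summarize a panel of predictors into one principal factor,
# then add it as a regressor to a simple autoregression of returns.
import numpy as np

rng = np.random.default_rng(0)
T = 200
factor_true = rng.normal(size=T)
X = np.outer(factor_true, rng.normal(size=8)) + 0.1 * rng.normal(size=(T, 8))
y = np.r_[0.0, 0.3 * factor_true[:-1]] + 0.05 * rng.normal(size=T)

# principal factor = first left singular vector of the standardized panel
Z = (X - X.mean(0)) / X.std(0)
f = np.linalg.svd(Z, full_matrices=False)[0][:, 0]

def fit_mse(design, target):
    """In-sample mean squared error of a least-squares fit."""
    beta, *_ = np.linalg.lstsq(design, target, rcond=None)
    return np.mean((target - design @ beta) ** 2)

D_ar = np.c_[np.ones(T - 1), y[:-1]]   # plain AR(1)
D_fa = np.c_[D_ar, f[:-1]]             # factor-augmented AR(1)
# nested least squares: augmentation cannot fit worse in-sample
print(fit_mse(D_ar, y[1:]) >= fit_mse(D_fa, y[1:]))
```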

ACCB- Adaptive Congestion Control with backoff Algorithm for CoAP

  • Deshmukh, Sneha;Raisinghani, Vijay T.
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.10
    • /
    • pp.191-200
    • /
    • 2022
  • Constrained Application Protocol (CoAP) is a protocol standardized by the Internet Engineering Task Force (IETF) for the Internet of Things (IoT). IoT devices have limited computation power, memory, and connectivity capabilities. One of the significant problems in IoT networks is congestion control. The CoAP standard has an exponential-backoff congestion control mechanism, which may not be adequate for all IoT applications. Each IoT application has different characteristics, requiring a novel algorithm to handle congestion in the IoT network. Unnecessary retransmissions and packet collisions, caused by lossy links and higher packet error rates, lead to congestion in the IoT network. This paper presents an adaptive congestion control protocol for CoAP, Adaptive Congestion Control with a Backoff algorithm (ACCB), an extension of our earlier protocol AdCoCoA. The proposed algorithm estimates RTT, RTTVAR, and RTO using dynamic factors instead of fixed values. The backoff mechanism also uses dynamic factors to estimate the RTO value on retransmissions. This dynamic adaptation helps to improve CoAP performance and reduce retransmissions. The results show that ACCB has significantly higher goodput (49.5%, 436.5%, 312.7%), packet delivery ratio (10.1%, 56%, 23.3%), and transmission rate (37.7%, 265%, 175.3%) compared to CoAP, CoCoA+, and AdCoCoA, respectively, in the linear scenario, and significantly higher goodput (60.5%, 482%, 202.1%), packet delivery ratio (7.6%, 60.6%, 26%), and transmission rate (40.9%, 284%, 146.45%) compared to CoAP, CoCoA+, and AdCoCoA, respectively, in the random-walk scenario. ACCB has a retransmission index similar to CoAP, CoCoA+, and AdCoCoA in both scenarios.
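
The general shape of adaptive RTO estimation with a dynamic backoff can be sketched in the spirit of RFC 6298 / CoCoA-style schemes. The smoothing weights and the backoff rule below are illustrative assumptions, not ACCB's published formulas:

```python
class RtoEstimator:
    """Smoothed RTT/RTTVAR-based RTO with a state-dependent backoff (sketch)."""

    def __init__(self, alpha=0.125, beta=0.25):
        self.alpha, self.beta = alpha, beta
        self.srtt = self.rttvar = None
        self.rto = 2.0  # initial timeout in seconds, before any RTT sample

    def on_rtt_sample(self, rtt):
        if self.srtt is None:
            self.srtt, self.rttvar = rtt, rtt / 2
        else:
            self.rttvar = (1 - self.beta) * self.rttvar + self.beta * abs(self.srtt - rtt)
            self.srtt = (1 - self.alpha) * self.srtt + self.alpha * rtt
        self.rto = self.srtt + 4 * self.rttvar

    def on_retransmit(self):
        # dynamic backoff: back off aggressively when RTO is small,
        # gently when it is already large (illustrative rule)
        factor = 2.5 if self.rto < 1.0 else 1.5
        self.rto = min(self.rto * factor, 60.0)
```

A fixed binary exponential backoff would instead always double `rto`; making the factor depend on the current estimate is what lets such schemes cut unnecessary retransmissions on lossy links.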

Machine Learning Methods to Predict Vehicle Fuel Consumption

  • Ko, Kwangho
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.9
    • /
    • pp.13-20
    • /
    • 2022
  • Machine learning (ML) models are proposed and analyzed to predict vehicle fuel consumption (FC) in real time. Test driving was done with a car to measure vehicle speed, acceleration, road gradient, and FC for the training dataset. The various ML models were trained with the feature data of speed, acceleration, and road gradient, with FC as the target. Two kinds of ML models are used in the study: regression models (linear regression and k-nearest neighbors regression) and classification models (k-nearest neighbors classifier, logistic regression, decision tree, random forest, and gradient boosting). The prediction accuracy for real-time FC is low, in the range of 0.5 ~ 0.6, and the classification models are more accurate than the regression ones. The prediction error for total FC is very low, about 0.2 ~ 2.0%, and here the regression models are more accurate than the classification ones. This is because the coefficient of determination (R2), used as the accuracy score, distributes predicted values around the mean of the targets as the coefficient decreases. Therefore, regression models are good for total FC, and classification models are proper for real-time FC prediction.
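
The two framings compared in the study can be illustrated with a single k-NN predictor. All feature values, FC targets, and class boundaries below are invented for illustration, not the paper's measured data:

```python
# Regression predicts FC directly; classification predicts a discretized FC class.
import numpy as np

def knn_predict(Xtr, ytr, x, k=3, classify=False):
    d = np.linalg.norm(Xtr - x, axis=1)
    nn = ytr[np.argsort(d)[:k]]
    if classify:
        vals, counts = np.unique(nn, return_counts=True)
        return vals[np.argmax(counts)]        # majority vote over FC classes
    return nn.mean()                          # average neighbor FC

# features: [speed km/h, acceleration m/s^2]; target: instantaneous FC (mL/s)
Xtr = np.array([[30, 0.0], [32, 0.1], [80, 0.0], [82, 0.2], [50, 1.5]])
fc = np.array([0.8, 0.9, 1.6, 1.7, 2.5])
fc_class = np.digitize(fc, [1.0, 2.0])        # 0: low, 1: mid, 2: high

x = np.array([31, 0.05])
print(knn_predict(Xtr, fc, x))                   # regression estimate of FC
print(knn_predict(Xtr, fc_class, x, classify=True))  # predicted FC class
```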

Extended Analysis of Unsafe Acts violating Safety Rules caused Industrial Accidents (산재사고를 유발한 안전수칙 위반행위의 확장분석)

  • Lim, Hyeon Kyo;Ham, Seung Eon;Bak, Geon Yeong;Lee, Yong Hee
    • Journal of the Korean Society of Safety
    • /
    • v.37 no.3
    • /
    • pp.52-59
    • /
    • 2022
  • Conventionally, all the unsafe acts by human beings in relation to industrial accidents have been regarded as unintentional human errors. Exceptionally, however, in the cases with fatalities, seriously injured workers, and/or losses that evoked social issues, attention was paid to violating related laws and regulations for finding out some people to be prosecuted and given judicial punishments. As Heinrich stated, injury or loss in an accident is quite a random variable, so it can be unfair to utilize it as a criterion for prosecution or punishment. The present study was conducted to comprehend how categorizing intentional violations in unsafe acts might disrupt conventional conclusions about the industrial accident process. It was also intended to seek out the right direction for countermeasures by examining unsafe acts comprehensively rather than limiting the analysis to human errors only. In an analysis of 150 industrial accident cases that caused fatalities and featured relatively clear accident scenarios, the results showed that only 36.0% (54 cases) of the workers recognized the situation they confronted as risky, out of which 29.6% (16 cases) thought of the risk as trivial. In addition, even when the risks were recognized, most workers attempted to solve the hazardous situations in ways that violated rules or regulations. If analyzed with a focus on human errors, accidents can be attributed to personal deviations. However, if considered with an emphasis on safety rules or regulations, the focus will naturally move to the question of whether the workers intentionally violated them or not. As a consequence, failure of managerial efforts may be highlighted. Therefore, it was concluded that management should consider unsafe acts comprehensively, with violations included in principle, during accident investigations and the development of countermeasures to prevent future accidents.

3D-Distortion Based Rate Distortion Optimization for Video-Based Point Cloud Compression

  • Yihao Fu;Liquan Shen;Tianyi Chen
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.2
    • /
    • pp.435-449
    • /
    • 2023
  • The state-of-the-art video-based point cloud compression (V-PCC) compresses 3D point clouds efficiently by projecting points onto 2D images. These images are then padded and compressed by High Efficiency Video Coding (HEVC). Pixels in the padded 2D images are classified into three groups: origin pixels, padded pixels, and unoccupied pixels. Origin pixels are generated by projection of the 3D point cloud. Padded pixels and unoccupied pixels are generated by copying values from origin pixels during image padding. Padded pixels, like origin pixels, are reconstructed to 3D space during geometry reconstruction; unoccupied pixels are not reconstructed. The rate distortion optimization (RDO) used in HEVC is mainly aimed at keeping the balance between video distortion and video bitrate. However, traditional RDO is unreliable for padded pixels and unoccupied pixels, which leads to a significant waste of bits in geometry reconstruction. In this paper, we propose a new RDO scheme which takes 3D distortion into account instead of traditional video distortion for padded pixels and unoccupied pixels. Firstly, these pixels are classified based on the occupancy map. Secondly, different strategies are applied to these pixels to calculate their 3D distortions. Finally, the obtained 3D distortions replace the sum of squared errors (SSE) during the full RDO process in intra prediction and inter prediction. The proposed method is applied to geometry frames. Experimental results show that the proposed algorithm achieves an average of 31.41% and 6.14% bitrate savings for the D1 metric in the Random Access and All Intra settings, respectively, on geometry videos compared with the V-PCC anchor.
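
The key idea, replacing the uniform 2D SSE in the RD cost J = D + λR with a distortion term that depends on each pixel's occupancy class, can be sketched schematically. The per-class weights and the rate stand-in below are invented for illustration, not the paper's actual distortion model:

```python
# Schematic occupancy-aware RD cost for a block of a padded geometry image.
ORIGIN, PADDED, UNOCCUPIED = range(3)

def rd_cost(pixels, lam):
    """pixels: (occupancy_class, orig_val, recon_val) tuples; returns J = D + lam*R."""
    d = 0.0
    for cls, o, r in pixels:
        err2 = (o - r) ** 2
        if cls == ORIGIN:
            d += err2                 # real geometry: count the full squared error
        elif cls == PADDED:
            d += 0.25 * err2          # reconstructed but redundant: down-weight
        # UNOCCUPIED pixels never reach 3D space: they contribute no distortion
    rate = len(pixels)                # stand-in for the coded bit count of the block
    return d + lam * rate

block = [(ORIGIN, 10, 8), (PADDED, 10, 6), (UNOCCUPIED, 10, 0)]
print(rd_cost(block, 0.5))
```

With a plain SSE, the large error on the unoccupied pixel would dominate the mode decision even though that pixel is never reconstructed, which is exactly the waste the paper targets.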

Deep learning method for compressive strength prediction for lightweight concrete

  • Yaser A. Nanehkaran;Mohammad Azarafza;Tolga Pusatli;Masoud Hajialilue Bonab;Arash Esmatkhah Irani;Mehdi Kouhdarag;Junde Chen;Reza Derakhshani
    • Computers and Concrete
    • /
    • v.32 no.3
    • /
    • pp.327-337
    • /
    • 2023
  • Concrete is the most widely used building material, with various types including high- and ultra-high-strength, reinforced, normal, and lightweight concretes. However, accurately predicting concrete properties is challenging due to the geotechnical design code's requirement for specific characteristics. To overcome this issue, researchers have turned to new technologies like machine learning to develop proper methodologies for concrete specification. In this study, we propose a highly accurate deep learning-based predictive model to investigate the compressive strength (UCS) of lightweight concrete with natural aggregates (pumice). Our model was implemented on a database containing 249 experimental records and revealed that water, cement, water-cement ratio, fine-coarse aggregate, aggregate substitution rate, fine aggregate replacement, and superplasticizer are the most influential covariates on UCS. To validate our model, we trained and tested it on random subsets of the database, and its performance was evaluated using a confusion matrix and receiver operating characteristic (ROC) overall accuracy. The proposed model was compared with widely known machine learning methods such as MLP, SVM, and DT classifiers to assess its capability. In addition, the model was tested on 25 laboratory UCS tests to evaluate its predictability. Our findings showed that the proposed model achieved the highest accuracy (accuracy=0.97, precision=0.97) and the lowest error rate with a high learning rate (R2=0.914), as confirmed by ROC (AUC=0.971), which is higher than other classifiers. Therefore, the proposed method demonstrates a high level of performance and capability for UCS predictions.
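
The evaluation metrics cited in the abstract (overall accuracy from a confusion matrix and ROC AUC) can be computed minimally. The labels and scores below are synthetic, not the study's 249-record database:

```python
def accuracy(y_true, y_pred):
    """Fraction of correct predictions (the confusion-matrix diagonal mass)."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def roc_auc(y_true, scores):
    """Probability that a random positive outscores a random negative (ties = 0.5)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y = [1, 1, 0, 0]
scores = [0.9, 0.4, 0.6, 0.1]
print(roc_auc(y, scores))   # -> 0.75
```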