• Title/Abstract/Keyword: weighted algorithm

Search results: 1,105 items (processing time: 0.027 seconds)

실측 강우 연계에 따른 호우피해예상도 표출 보간 알고리즘에 관한 연구 (A Study on Expression Interpolation Algorithm of Hazard Mapping for Damaged from flood According to Real Rainfall Linkage)

  • 임소망;유완식;황의호
    • 한국수자원학회:학술대회논문집
    • /
    • 한국수자원학회 2018년도 학술발표회
    • /
    • pp.381-381
    • /
    • 2018
  • In Korea, repeated natural disasters have led to the production of various types of flood and inundation maps to meet different needs and purposes. This study aims to display real-time damage risk zones by linking the heavy-rain damage prediction maps produced for the design frequency and the two higher frequencies with observed rainfall, so that they can be used in the response stage of each disaster-management phase. To display damage risk zones in real time, spatial interpolation algorithms are applied to heavy-rain damage prediction maps linked with observed rainfall. A heavy-rain damage prediction map is a map prepared so that flood-prone areas can be predicted in advance, in order to minimize casualties and property damage when flooding occurs due to flash rainfall or typhoons. It is produced on the basis of Article 21 of the River Act and Article 17 of its Enforcement Decree, using terrain data (DEM), stream centerlines, cross-section lines, bank heights, hydrological data, and roughness coefficients. This study presents algorithms for applying IDW (Inverse Distance Weighted), TIN (Triangulated Irregular Network), and Kriging interpolation to the heavy-rain damage prediction maps. To apply these interpolation algorithms, application cases of each interpolation method were analyzed; as a result, a method is presented for constructing damage risk zones for frequencies other than the design frequency and the two higher frequencies (the intervals between the lower and design frequencies and between the design and higher frequencies) by interpolating the maps. By applying IDW, TIN, and Kriging interpolation, damage risk zones for these intermediate frequencies can be displayed, and the displayed frequencies can be linked to observed rainfall through a matching table of point probability rainfall versus frequency. The results of this study are expected to be used in a storm and flood damage prediction system for the prevention and response stages of disaster management.

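The abstract above names IDW, TIN, and Kriging as candidate spatial interpolation methods for the hazard maps. As a rough, self-contained illustration of the simplest of these, the sketch below implements plain inverse distance weighting over scattered samples; the coordinates, depth values, and power parameter are hypothetical and not taken from the study.

```python
import numpy as np

def idw_interpolate(xy_known, values, xy_query, power=2.0, eps=1e-12):
    """Inverse Distance Weighted (IDW) interpolation.

    xy_known : (n, 2) array of coordinates with known values
    values   : (n,)  array of observed values (e.g., flood depth)
    xy_query : (m, 2) array of coordinates to estimate
    power    : distance exponent (2 is a common default)
    """
    xy_known = np.asarray(xy_known, dtype=float)
    values = np.asarray(values, dtype=float)
    xy_query = np.asarray(xy_query, dtype=float)

    # Pairwise distances between query points and known points: shape (m, n)
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, eps) ** power            # inverse-distance weights
    return (w * values).sum(axis=1) / w.sum(axis=1)  # weighted average

# Hypothetical example: estimate a value between three mapped sample locations
known = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
depths = [0.8, 1.4, 1.1]
print(idw_interpolate(known, depths, [(0.5, 0.5)]))
```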

Risk assessment of karst collapse using an integrated fuzzy analytic hierarchy process and grey relational analysis model

  • Ding, Hanghang;Wu, Qiang;Zhao, Dekang;Mu, Wenping;Yu, Shuai
    • Geomechanics and Engineering
    • /
    • Vol. 18, No. 5
    • /
    • pp.515-525
    • /
    • 2019
  • A karst collapse, as a natural hazard, is quite different from an ordinary collapse. In recent years, karst collapses have caused substantial economic losses and even threatened human safety. A risk assessment model for karst collapse was developed based on the fuzzy analytic hierarchy process (FAHP) and grey relational analysis (GRA), which together form a simple and effective mathematical algorithm. The evaluation index played an important role in the process of completing the risk assessment model. In this study, the proposed model was applied to Jiaobai village in southwest China. First, the main controlling factors were summarized as the evaluation indices of the model based on an investigation and statistical analysis of the natural formation law of karst collapse. Second, the FAHP was used to determine the relative weights, and GRA was used to calculate the grey relational coefficients among the indices. Finally, the relational sequence of the evaluation objects was established by calculating the grey weighted relational degree. According to the maximum relational rule, the greater the relational degree, the closer the evaluation object is to the corresponding hierarchy set. The results showed that the model accurately reproduced the field conditions. They also demonstrated the contribution of the various controlling factors to the process of karst collapse and the degree of collapse in the study area.
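
As a minimal sketch of the grey relational analysis step described above, the code below computes grey relational coefficients against an ideal reference sequence and aggregates them into a grey weighted relational degree; the decision matrix, the FAHP-style weights, and the distinguishing coefficient of 0.5 are illustrative assumptions, not data from the paper.

```python
import numpy as np

def grey_weighted_relational_degree(matrix, weights, rho=0.5):
    """Grey relational analysis with a weighted relational degree.

    matrix  : (n_objects, n_indices) decision matrix, normalized to [0, 1]
              with larger values meaning 'better' for every index
    weights : (n_indices,) index weights, e.g., relative weights from FAHP
    rho     : distinguishing coefficient, conventionally 0.5
    """
    X = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # ensure the weights sum to 1

    ref = X.max(axis=0)                   # reference (ideal) sequence
    delta = np.abs(X - ref)               # absolute differences from the reference
    dmin, dmax = delta.min(), delta.max()

    # Grey relational coefficients, one per (object, index)
    xi = (dmin + rho * dmax) / (delta + rho * dmax)

    # Grey weighted relational degree per object; larger = closer to the ideal
    return xi @ w

# Hypothetical example with 3 evaluation objects and 4 indices
scores = [[0.9, 0.4, 0.7, 0.6],
          [0.5, 0.8, 0.6, 0.9],
          [0.3, 0.2, 0.4, 0.5]]
fahp_weights = [0.4, 0.3, 0.2, 0.1]       # stand-in for FAHP-derived weights
degree = grey_weighted_relational_degree(scores, fahp_weights)
print(np.argsort(-degree))                # ranking by the maximum relational rule
```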

Deriving the Effective Atomic Number with a Dual-Energy Image Set Acquired by the Big Bore CT Simulator

  • Jung, Seongmoon;Kim, Bitbyeol;Kim, Jung-in;Park, Jong Min;Choi, Chang Heon
    • Journal of Radiation Protection and Research
    • /
    • Vol. 45, No. 4
    • /
    • pp.171-177
    • /
    • 2020
  • Background: This study aims to determine the effective atomic number (Zeff) from dual-energy image sets obtained using a conventional computed tomography (CT) simulator. The estimated Zeff can be used for deriving the stopping power and material decomposition of CT images, thereby improving dose calculations in radiation therapy. Materials and Methods: An electron-density phantom was scanned using a Philips Brilliance CT Big Bore at 80 and 140 kVp. The estimated Zeff values were compared with those obtained using the calibration phantom by applying the Rutherford, Schneider, and Joshi methods. The fitting parameters were optimized using a nonlinear least-squares regression algorithm. The fitting curve and mass attenuation data were obtained from the National Institute of Standards and Technology. The fitting parameters were validated by estimating the residual errors between the reference and calculated Zeff values. Next, the calculation accuracy of Zeff was evaluated by comparing the calculated values with the reference Zeff values of the insert plugs. The exposure levels of patients under additional CT scanning at 80, 120, and 140 kVp were evaluated by measuring the weighted CT dose index (CTDIw). Results and Discussion: The residual errors of the fitting parameters were lower than 2%. The best and worst Zeff estimates were obtained using the Schneider and Joshi methods, respectively. The maximum differences between the reference and calculated values were 11.3% (for lung during inhalation), 4.7% (for adipose tissue), and 9.8% (for lung during inhalation) when applying the Rutherford, Schneider, and Joshi methods, respectively. Under dual-energy scanning (80 and 140 kVp), the patient exposure level was approximately twice that of general single-energy scanning (120 kVp). Conclusion: Zeff was calculated from two image sets scanned by a conventional single-energy CT simulator, and the results obtained using the three different methods were compared. The Zeff calculation based on single-energy CT scans was shown to be feasible.
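
The abstract describes fitting a calibration curve that maps dual-energy CT measurements of phantom inserts to the effective atomic number by nonlinear least squares. The sketch below shows the general shape of such a workflow with a generic power-law model and made-up calibration numbers; it is not the Rutherford, Schneider, or Joshi parameterization, and the function names and data are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def ratio_model(z_eff, a, b, c):
    """Generic power-law stand-in relating Zeff to a low/high-kVp attenuation ratio."""
    return a * z_eff**b + c

# Hypothetical calibration data: known Zeff of phantom inserts and a measured
# ratio of attenuation-like quantities at 80 kVp vs. 140 kVp.
z_ref = np.array([5.92, 6.60, 7.42, 10.90, 12.59, 13.80])
ratio = np.array([1.05, 1.09, 1.15, 1.42, 1.63, 1.80])

# Nonlinear least-squares fit of the calibration curve
params, _ = curve_fit(ratio_model, z_ref, ratio, p0=(0.01, 2.0, 1.0))

def estimate_zeff(measured_ratio, params, grid=np.linspace(4.0, 20.0, 2000)):
    """Invert the fitted curve numerically to estimate Zeff for a new measurement."""
    curve = ratio_model(grid, *params)
    return grid[np.argmin(np.abs(curve - measured_ratio))]

print(estimate_zeff(1.30, params))

# Residual error between reference and re-calculated Zeff, as a percentage
z_calc = np.array([estimate_zeff(r, params) for r in ratio])
print(100.0 * np.abs(z_calc - z_ref) / z_ref)
```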

Weight Adjustment Scheme Based on Hop Count in Q-routing for Software Defined Networks-enabled Wireless Sensor Networks

  • Godfrey, Daniel;Jang, Jinsoo;Kim, Ki-Il
    • Journal of information and communication convergence engineering
    • /
    • Vol. 20, No. 1
    • /
    • pp.22-30
    • /
    • 2022
  • The reinforcement learning algorithm has proven its potential in solving sequential decision-making problems under uncertainty, such as finding paths to route data packets in wireless sensor networks. With reinforcement learning, computing the optimum path requires careful definition of the so-called reward function, a linear function that aggregates multiple objective functions into a single objective to compute a numerical value (reward) to be maximized. In a typically defined linear reward function, the objectives to be optimized are integrated as a weighted sum with fixed weighting factors for all learning agents. This study proposes a reinforcement learning-based routing protocol for wireless sensor networks in which different learning agents prioritize different objectives by assigning different weighting factors to the aggregated objectives of the reward function. We assign appropriate weighting factors to the objectives in the reward function of a sensor node according to its hop-count distance to the sink node. We expect this approach to enhance the effectiveness of multi-objective reinforcement learning for wireless sensor networks with a balanced trade-off among competing parameters. Furthermore, we propose an SDN (Software Defined Network) architecture with multiple controllers for constant network monitoring, allowing learning agents to adapt to the dynamics of the network conditions. Simulation results show that the proposed scheme enhances the performance of wireless sensor networks under varied conditions, such as node density and traffic intensity, with a good trade-off among competing performance metrics.
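
To make the hop-count-based weighting of the reward function concrete, here is a minimal sketch in which a node's distance to the sink shifts the balance between an energy objective and a delay objective before a standard Q-routing update; the weighting rule, parameter values, and node names are illustrative assumptions rather than the paper's exact scheme.

```python
def hop_based_weights(hop_count, max_hops):
    """Assign weighting factors to reward objectives based on hop distance to the sink.

    Illustrative rule (an assumption, not the paper's scheme): nodes far from the
    sink weight residual energy more heavily, while nodes near the sink weight
    delay more heavily. The two weights always sum to 1.
    """
    ratio = min(hop_count / max_hops, 1.0)
    w_energy = 0.2 + 0.6 * ratio          # grows with distance from the sink
    w_delay = 1.0 - w_energy
    return w_energy, w_delay

def reward(residual_energy, delay, hop_count, max_hops):
    """Weighted-sum reward aggregating two normalized objectives in [0, 1]."""
    w_energy, w_delay = hop_based_weights(hop_count, max_hops)
    # Delay is a cost, so it enters the reward with a negative sign
    return w_energy * residual_energy - w_delay * delay

def q_update(q_table, node, next_hop, r, q_next_best, alpha=0.1, gamma=0.9):
    """Standard Q-learning update for one (node, next_hop) routing decision."""
    old = q_table.get((node, next_hop), 0.0)
    q_table[(node, next_hop)] = old + alpha * (r + gamma * q_next_best - old)
    return q_table

# Hypothetical routing step for a node 4 hops away from the sink
q = {}
r = reward(residual_energy=0.8, delay=0.3, hop_count=4, max_hops=10)
q = q_update(q, node="n7", next_hop="n3", r=r, q_next_best=0.5)
print(q)
```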

VM Scheduling for Efficient Dynamically Migrated Virtual Machines (VMS-EDMVM) in Cloud Computing Environment

  • Supreeth, S.;Patil, Kirankumari
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 16, No. 6
    • /
    • pp.1892-1912
    • /
    • 2022
  • With the massive demand for and growth of cloud computing, virtualization plays an important role in providing services to end users efficiently. However, with the increase in services over cloud computing, it is becoming more challenging to manage and run multiple Virtual Machines (VMs) because of excessive power consumption. It is thus important to overcome these challenges by adopting an efficient technique to manage and monitor the status of VMs in a cloud environment. Power/energy consumption can be reduced by managing VMs more effectively in the datacenters of the cloud environment, switching VMs between active and inactive states. The resulting reduction in energy consumption lowers carbon emissions, leading to green cloud computing. The proposed efficient dynamic VM scheduling approach minimizes Service Level Agreement (SLA) violations and manages VM migration by lowering energy consumption effectively while keeping the load balanced. In the proposed work, the VM Scheduling for Efficient Dynamically Migrated VMs (VMS-EDMVM) approach first detects over-utilized hosts using the Modified Weighted Linear Regression (MWLR) algorithm, together with a dynamic utilization model for under-utilized hosts. A Maximum Power Reduction and Reduced Time (MPRRT) approach is developed for VM selection, followed by a two-phase Best-Fit CPU, BW (BFCB) VM scheduling mechanism, simulated in CloudSim using an adaptive utilization threshold. The proposed work achieved a power consumption of 108.45 kWh, and the total SLA violation was 0.1%. The VM migration count was reduced to 2,202, revealing better performance compared with the other methods discussed in this paper.
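
The abstract attributes over-utilized-host detection to a Modified Weighted Linear Regression (MWLR) algorithm whose details are not given here. The sketch below shows a generic weighted least-squares fit of a host's recent CPU-utilization trend, with newer samples weighted more heavily, used to flag a host whose predicted utilization crosses a threshold; the weighting scheme and threshold are assumptions.

```python
import numpy as np

def weighted_linear_regression(history, weights=None):
    """Fit u(t) = b0 + b1*t to a host's recent CPU-utilization history by
    weighted least squares, weighting recent samples more heavily."""
    u = np.asarray(history, dtype=float)
    t = np.arange(len(u), dtype=float)
    if weights is None:
        weights = np.linspace(0.1, 1.0, len(u))    # newer samples get larger weights
    W = np.diag(weights)
    X = np.column_stack([np.ones_like(t), t])
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ u)   # (b0, b1)
    return beta

def is_overutilized(history, threshold=0.8, horizon=1):
    """Flag a host whose predicted utilization exceeds the given threshold."""
    b0, b1 = weighted_linear_regression(history)
    predicted = b0 + b1 * (len(history) - 1 + horizon)
    return predicted > threshold

# Hypothetical utilization trace of one host (fractions of CPU capacity)
trace = [0.55, 0.60, 0.66, 0.71, 0.78, 0.83]
print(is_overutilized(trace))   # True: the weighted trend crosses the threshold
```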

점들의 구간 커버에 대한 최대 가중치 맴버쉽 최소화 (Minimizing the Maximum Weighted Membership of Interval Cover of Points)

  • 김재훈
    • 한국정보통신학회논문지
    • /
    • Vol. 26, No. 10
    • /
    • pp.1531-1536
    • /
    • 2022
  • This paper deals with the problem of finding, given n points and m intervals on a line, a set of intervals that covers all of the points. Such a set of intervals is called an interval cover of the points. The problem is a special case of set cover, which is well known to be NP-hard. Possible optimization criteria include minimizing the number of covering intervals and maximizing the number of points covered by exactly one interval. In this paper, when weights are assigned to the intervals, the membership of a point is defined as the sum of the weights of the intervals covering it, and we study the problem of finding an interval cover that minimizes the maximum membership over all points. Using a dynamic programming formulation, we propose an O(m²)-time algorithm that improves on the O(nm log n) time complexity of previous work.
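
To illustrate the objective being minimized, the sketch below computes the maximum weighted membership of a candidate interval cover and finds the optimum by exhaustive search on a tiny instance; it does not reproduce the paper's O(m²) dynamic program, and the example points, intervals, and weights are made up.

```python
from itertools import combinations

def max_weighted_membership(points, intervals, weights, chosen):
    """Maximum, over all points, of the total weight of chosen intervals covering it.

    points    : list of x-coordinates on the line
    intervals : list of (left, right) pairs
    weights   : weight of each interval
    chosen    : indices of the intervals in the candidate cover
    Returns None if some point is left uncovered (i.e., 'chosen' is not a cover).
    """
    worst = 0
    for p in points:
        m = sum(weights[i] for i in chosen if intervals[i][0] <= p <= intervals[i][1])
        if m == 0:
            return None
        worst = max(worst, m)
    return worst

def best_cover_bruteforce(points, intervals, weights):
    """Exhaustive search over all subsets; exponential, for tiny instances only.
    (The paper's O(m^2) dynamic program is not reproduced here.)"""
    best, best_val = None, None
    for k in range(1, len(intervals) + 1):
        for chosen in combinations(range(len(intervals)), k):
            val = max_weighted_membership(points, intervals, weights, chosen)
            if val is not None and (best_val is None or val < best_val):
                best, best_val = chosen, val
    return best, best_val

# Hypothetical instance: 4 points and 3 weighted intervals
pts = [1, 2, 5, 7]
ivs = [(0, 3), (2, 6), (4, 8)]
ws = [2, 3, 1]
print(best_cover_bruteforce(pts, ivs, ws))   # ((0, 2), 2)
```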

Secure and Efficient Cooperative Spectrum Sensing Against Byzantine Attack for Interweave Cognitive Radio System

  • Wu, Jun;Chen, Ze;Bao, Jianrong;Gan, Jipeng;Chen, Zehao;Zhang, Jia
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 16, No. 11
    • /
    • pp.3738-3760
    • /
    • 2022
  • Due to the increasing spectrum demand of new wireless devices and applications, the cooperative spectrum sensing (CSS) paradigm is the most promising solution to alleviate the spectrum shortage problem. However, in an interweave cognitive radio (CR) system, the inherent nature of CSS opens a hole for Byzantine attacks, resulting in a significant drop in CSS security and efficiency. In view of this, a weighted differential sequential single symbol (WD3S) algorithm, developed on the MATLAB platform, is proposed in this paper to accurately identify malicious users (MUs) and extract useful sensing information from their malicious reports. To achieve this, a dynamic Byzantine attack model is proposed to describe the malicious behavior of MUs in an interweave CR system. On this basis, a data transmission consistency verification method is formulated to evaluate the correctness of the global decision and update the trust value (TrV) of secondary users (SUs), thereby accurately identifying MUs. We then reuse malicious sensing information from MUs through a weight allocation scheme. In addition, considering the high spectrum usage of the primary network, a sequential and differential reporting scheme based on a single symbol is also proposed for sensing-information submission. Finally, under various Byzantine attack types, we provide in-depth simulations to demonstrate the efficiency and security of the proposed WD3S.
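
As a heavily simplified stand-in for the trust-value update and weight-allocation ideas described above, the sketch below adjusts each secondary user's trust value according to agreement with the verified global decision and fuses local reports with trust-dependent weights, flipping and down-weighting reports from suspected malicious users; the update rule, thresholds, and weights are assumptions, not the WD3S algorithm itself.

```python
def update_trust(trust, agreed, delta=0.1, floor=0.0, ceil=1.0):
    """Raise a secondary user's trust value when its report agrees with the
    verified global decision, lower it otherwise (illustrative rule only)."""
    trust = trust + delta if agreed else trust - delta
    return min(max(trust, floor), ceil)

def weighted_fusion(reports, trust_values, threshold=0.5):
    """Fuse binary local decisions (1 = primary user present) at the fusion center.

    Reports from low-trust users are treated as malicious: their decision is
    flipped and given a small weight, so the information is still reused rather
    than simply discarded (a simplified stand-in for the WD3S weight allocation).
    """
    num, den = 0.0, 0.0
    for r, t in zip(reports, trust_values):
        if t >= threshold:
            w, d = t, r                     # trusted SU: use the report as-is
        else:
            w, d = (1.0 - t) * 0.5, 1 - r   # suspected MU: flip and down-weight
        num += w * d
        den += w
    return 1 if den > 0 and num / den >= 0.5 else 0

# Hypothetical round: four SUs, one of them (trust 0.2) reporting maliciously
reports = [1, 1, 0, 1]
trust = [0.9, 0.8, 0.2, 0.7]
print(weighted_fusion(reports, trust))   # -> 1
```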

라플라스 분포와 가중치 마스크를 이용한 AWGN 제거 (AWGN Removal using Laplace Distribution and Weighted Mask)

  • 박화정;김남호
    • 한국정보통신학회논문지
    • /
    • Vol. 25, No. 12
    • /
    • pp.1846-1852
    • /
    • 2021
  • With the Fourth Industrial Revolution and advances in IoT technology, a wide range of digital devices are being deployed across many fields. However, noise introduced while acquiring or transmitting images not only corrupts the information but also affects systems, causing errors and malfunctions. AWGN (additive white Gaussian noise) is one of the most common types of image noise. Previous studies have proposed methods to remove such noise, with representative examples including the AF, A-TMF, and MF filters. Because conventional filters have difficulty taking the characteristics of the image into account, they tend to cause smoothing in regions with many high-frequency components. To remove noise effectively even in high-frequency regions, the proposed algorithm first computes the local standard-deviation distribution and then obtains the final output by applying weights derived from the probability density function of the Laplace distribution, fitted by a curve-fitting method.
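
As a rough sketch of the idea described above (local standard deviation driving Laplace-distribution weights in a mask), the code below smooths an AWGN-corrupted image with a weighted window whose weights follow a Laplace probability density of the intensity difference from the window center, scaled by the local standard deviation; the window size and weighting rule are illustrative assumptions rather than the paper's curve-fitted weights.

```python
import numpy as np

def laplace_pdf(x, mu=0.0, b=1.0):
    """Probability density function of the Laplace distribution."""
    return np.exp(-np.abs(x - mu) / b) / (2.0 * b)

def denoise_awgn(image, radius=2):
    """Weighted-mask smoothing of an AWGN-corrupted image (illustrative sketch).

    For every pixel, neighbors are weighted by a Laplace PDF of their intensity
    difference from the window center, with the scale set by the local standard
    deviation, so flat regions are smoothed strongly and high-frequency regions
    only weakly.
    """
    img = np.asarray(image, dtype=float)
    padded = np.pad(img, radius, mode="reflect")
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            win = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            sigma = win.std() + 1e-6              # local standard deviation
            weights = laplace_pdf(win - img[y, x], b=sigma)
            out[y, x] = (weights * win).sum() / weights.sum()
    return out

# Hypothetical usage on a small noisy test image
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 255, 32), (32, 1))
noisy = clean + rng.normal(0, 15, clean.shape)    # add AWGN with sigma = 15
print(np.abs(denoise_awgn(noisy) - clean).mean() < np.abs(noisy - clean).mean())
```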

High Noise Density Median Filter Method for Denoising Cancer Images Using Image Processing Techniques

  • Priyadharsini.M, Suriya;Sathiaseelan, J.G.R
    • International Journal of Computer Science & Network Security
    • /
    • Vol. 22, No. 11
    • /
    • pp.308-318
    • /
    • 2022
  • Noise is a serious issue when sending images over electronic communication channels. Impulse noise, created by unstable voltage during image acquisition and transmission, is one of the most common noises in digital communication. Accurate diagnostic images can be obtained by removing this noise without affecting edges and fine details. This paper proposes the High Noise Density Median Filter (HNDMF), which operates in two stages for each pixel and decides whether the test pixel is degraded by salt-and-pepper noise (SPN). In the first stage, a detector identifies corrupted pixels; in the second stage, each corrupted pixel is replaced by a noise-free processed value computed for its window. The paper also reviews known image denoising methods and uses a new decision-based weighted median filter to remove impulse noise. Using the Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity Index Method (SSIM) metrics, the performance of the proposed method is compared with the Gaussian Filter (GF), Adaptive Median Filter (AMF), and PHDNF. A detailed simulation on the Mini-MIAS dataset confirms the improvement of the presented model: for images affected by various amounts of salt-and-pepper noise as well as speckle noise, the experimental results show that the HNDMF reaches better quality metrics and higher picture quality than the existing filters, accurately detecting salt-and-pepper noise pixels and replacing them with mean and median values. The proposed method thus improves the median filter with a significant change.
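
As a generic illustration of the two-stage, decision-based filtering described above (detect salt-and-pepper pixels, then replace only those), here is a minimal sketch; the window size, detection rule, and fallback behavior are assumptions and not the exact HNDMF procedure.

```python
import numpy as np

def decision_based_median_filter(image, radius=1, low=0, high=255):
    """Two-stage salt-and-pepper noise (SPN) removal sketch.

    Stage 1: a pixel is flagged as corrupted if it takes an extreme value
    (0 or 255). Stage 2: each corrupted pixel is replaced by the median of the
    noise-free pixels in its window; if the window has none, the window mean
    is used instead.
    """
    img = np.asarray(image, dtype=float)
    noisy_mask = (img <= low) | (img >= high)          # stage 1: detector
    padded = np.pad(img, radius, mode="reflect")
    padded_mask = np.pad(noisy_mask, radius, mode="reflect")
    out = img.copy()
    for y, x in zip(*np.nonzero(noisy_mask)):          # stage 2: replacement
        win = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
        good = win[~padded_mask[y:y + 2 * radius + 1, x:x + 2 * radius + 1]]
        out[y, x] = np.median(good) if good.size else win.mean()
    return out

# Hypothetical usage: corrupt 20% of pixels with salt-and-pepper noise
rng = np.random.default_rng(1)
clean = np.full((64, 64), 128.0)
noisy = clean.copy()
mask = rng.random(clean.shape) < 0.2
noisy[mask] = rng.choice([0.0, 255.0], size=mask.sum())
restored = decision_based_median_filter(noisy)
print(np.abs(restored - clean).mean())   # much smaller than for the noisy image
```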

Deep learning for the classification of cervical maturation degree and pubertal growth spurts: A pilot study

  • Mohammad-Rahimi, Hossein;Motamadian, Saeed Reza;Nadimi, Mohadeseh;Hassanzadeh-Samani, Sahel;Minabi, Mohammad A. S.;Mahmoudinia, Erfan;Lee, Victor Y.;Rohban, Mohammad Hossein
    • 대한치과교정학회지
    • /
    • Vol. 52, No. 2
    • /
    • pp.112-122
    • /
    • 2022
  • Objective: This study aimed to present and evaluate a new deep learning model for determining cervical vertebral maturation (CVM) degree and growth spurts by analyzing lateral cephalometric radiographs. Methods: The study sample included 890 cephalograms. The images were classified into six cervical stages independently by two orthodontists. The images were also categorized into three degrees on the basis of the growth spurt: pre-pubertal, growth spurt, and post-pubertal. Subsequently, the samples were fed to a transfer learning model implemented using the Python programming language and PyTorch library. In the last step, the test set of cephalograms was randomly coded and provided to two new orthodontists in order to compare their diagnoses with the artificial intelligence (AI) model's performance using weighted kappa and Cohen's kappa statistical analyses. Results: The model's validation and test accuracy for the six-class CVM diagnosis were 62.63% and 61.62%, respectively. Moreover, the model's validation and test accuracy for the three-class classification were 75.76% and 82.83%, respectively. Furthermore, substantial agreement was observed between the two orthodontists, as well as between one of them and the AI model. Conclusions: The newly developed AI model had reasonable accuracy in detecting the CVM stage and high reliability in detecting the pubertal stage. However, its accuracy was still less than that of human observers. With further improvements in data quality, this model should be able to provide practical assistance to practicing dentists in the future.
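
The abstract reports agreement using weighted kappa and Cohen's kappa. The short sketch below shows how these statistics can be computed for ordinal CVM stage labels with scikit-learn; the rater labels are hypothetical.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical CVM stage labels (1-6) from two raters on the same test cephalograms
orthodontist = [1, 2, 2, 3, 4, 5, 6, 3, 4, 5]
ai_model     = [1, 2, 3, 3, 4, 4, 6, 3, 5, 5]

# Unweighted Cohen's kappa treats every disagreement equally
print(cohen_kappa_score(orthodontist, ai_model))

# Weighted kappa penalizes disagreements by how far apart the ordinal stages are,
# which suits ordered categories such as the six CVM stages
print(cohen_kappa_score(orthodontist, ai_model, weights="linear"))
print(cohen_kappa_score(orthodontist, ai_model, weights="quadratic"))
```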