Title/Summary/Keyword: Performance Metrics


An Efficient Load-Balancing Algorithm based on Bandwidth Reservation Scheme in Wireless Multimedia Networks (무선 멀티미디어 망에서 대역폭 예약을 이용한 효율적인 부하 균형 알고리즘)

  • 정영석;우매리;김종근
    • Journal of Korea Multimedia Society / v.5 no.4 / pp.441-449 / 2002
  • For multimedia traffic to be supported successfully in wireless network environments, it is necessary to provide Quality-of-Service (QoS) guarantees among mobile hosts (clients). In order to guarantee QoS, the call blocking probability must be kept below a target value during hand-off sessions. However, the QoS negotiated between a client and the network may not be guaranteed when a cell lacks available channels for new traffic, since mobile clients already in service must be able to continue their sessions. In this paper, we propose an efficient load-balancing algorithm based on an adaptive bandwidth reservation scheme that enlarges the pool of available channels in a cell. The proposed algorithm predicts the movement direction of clients in a cell and adjusts the amount of bandwidth to be reserved according to the load status of the cell. Each cell reserves part of its bandwidth for hand-off calls, and this reserved bandwidth is granted to hand-off calls prior to new connection requests. If the number of free channels falls below a low threshold, our scheme additionally applies a load-balancing algorithm with adaptive bandwidth reservation. To evaluate the performance of our algorithm, we measure metrics such as the blocking probability of new calls and the dropping probability of hand-off calls, and compare them with those of existing schemes.
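
To make the guard-channel idea above concrete, here is a minimal sketch of an adaptive bandwidth-reservation admission policy; the `Cell` class, its thresholds, and the adaptation rule are our own illustrative assumptions, not the authors' algorithm.

```python
# Hypothetical sketch of an adaptive bandwidth-reservation admission policy.
# All names and thresholds are illustrative, not from the paper.

class Cell:
    def __init__(self, capacity, reserved, low_threshold):
        self.capacity = capacity          # total channels in the cell
        self.reserved = reserved          # channels held back for hand-offs
        self.low_threshold = low_threshold
        self.in_use = 0

    def free_channels(self):
        return self.capacity - self.in_use

    def admit_new_call(self):
        # New calls may not dip into the reserved pool.
        if self.free_channels() > self.reserved:
            self.in_use += 1
            return True
        return False                      # blocked

    def admit_handoff(self):
        # Hand-off calls may use the reserved pool.
        if self.free_channels() > 0:
            self.in_use += 1
            return True
        return False                      # dropped

    def adapt_reservation(self, predicted_handoffs):
        # Grow the reservation when many hand-offs are predicted,
        # shrink it when the cell is lightly loaded.
        if self.free_channels() < self.low_threshold:
            self.reserved = min(self.reserved + predicted_handoffs,
                                self.capacity // 2)
        else:
            self.reserved = max(self.reserved - 1, 0)

cell = Cell(capacity=50, reserved=5, low_threshold=10)
print(cell.admit_new_call(), cell.admit_handoff())
```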


Detecting Errors in POS-Tagged Corpus on XGBoost and Cross Validation (XGBoost와 교차검증을 이용한 품사부착말뭉치에서의 오류 탐지)

  • Choi, Min-Seok;Kim, Chang-Hyun;Park, Ho-Min;Cheon, Min-Ah;Yoon, Ho;Namgoong, Young;Kim, Jae-Kyun;Kim, Jae-Hoon
    • KIPS Transactions on Software and Data Engineering / v.9 no.7 / pp.221-228 / 2020
  • A Part-of-Speech (POS) tagged corpus is a collection of electronic text in which each word is annotated with its corresponding POS tag; such corpora are widely used as training data for natural language processing. The training data is generally assumed to be error-free, but in reality it contains various types of errors, which degrade the performance of systems trained on it. To alleviate this problem, we propose a novel method for detecting errors in an existing POS-tagged corpus using an XGBoost classifier with cross-validation. We first train a POS-tagging classifier on the POS-tagged corpus, which contains some errors; the classifier cannot detect errors directly because there is no training data labeled with POS-tagging errors. We therefore detect errors by comparing the classifier's cross-validated outputs (POS probabilities) with the annotated tags, adjusting hyperparameters in the process. The hyperparameters are estimated on a small-scale error-tagged corpus, sampled from the POS-tagged corpus and marked up with POS errors by experts. We use recall and precision, which are widely used in information retrieval, as evaluation metrics. Since not all detected errors can be checked manually, we show that the proposed method is valid by comparing the distributions of the sample (the error-tagged corpus) and the population (the POS-tagged corpus). In the near future, we will apply the proposed method to a dependency-tree-tagged corpus and a semantic-role-tagged corpus.
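
As a rough illustration of the cross-validated flagging step described above, the following sketch trains an XGBoost classifier and flags tokens whose annotated tag receives low out-of-fold probability; the feature matrix and the threshold value are placeholders, not the paper's setup.

```python
# Minimal sketch of cross-validated error flagging with XGBoost.
# The features and the probability threshold are assumptions.
import numpy as np
from sklearn.model_selection import cross_val_predict
from xgboost import XGBClassifier

X = np.random.rand(1000, 20)            # stand-in for per-token features
y = np.random.randint(0, 5, size=1000)  # annotated POS tag ids

clf = XGBClassifier(n_estimators=100, max_depth=6, eval_metric="mlogloss")
# Out-of-fold probabilities: each token is scored by a model
# that never saw it during training.
proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")

# Flag tokens whose annotated tag receives low out-of-fold probability;
# the threshold would be tuned on the expert error-tagged sample.
threshold = 0.2
suspects = np.where(proba[np.arange(len(y)), y] < threshold)[0]
print(f"{len(suspects)} suspicious annotations")
```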

End-to-end Packet Statistics Analysis using OPNET Modeler Wireless Suite (OPNET Modeler Wireless Suite를 이용한 종단간 패킷 통계 분석)

  • Kim, Jeong-Su
    • The KIPS Transactions: Part C / v.18C no.4 / pp.265-278 / 2011
  • The objective of this paper is to analyze and characterize end-to-end packet statistics after modeling and simulating WiFi (IEEE 802.11g) and WiMAX (IEEE 802.16e) in a virtual wireless network using OPNET Modeler Wireless Suite. Wireless internal- and external-network simulators such as Remcom's Wireless InSite Real Time (RT) module, WinProp (W-LAN/Fixed WiMAX/Mobile WiMAX), and the SMI system are designed to model the data transfer rate based on wireless propagation signal strength. We approached our research from a different perspective, without relying on that characteristic of these wireless network simulators. That is, we discuss the purpose of a visual analysis of these packets and how packets are received at each point (e.g., wireless user, base station or access point, and HTTP server) through end-to-end virtual network modeling of an integrated wired and wireless network, without wireless propagation signal strength. Among wireless network performance metrics, packet statistics are important for QoS analysis; accurate packet statistics are especially essential for guaranteeing QoS to WiMAX users. We found some interesting results through modeling and simulation of the virtual wireless network using OPNET Modeler Wireless Suite, and we are also able to analyze efficiency from multiple views through the experimental and observational results.
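
OPNET Modeler itself is a commercial, GUI-driven tool, so as a neutral illustration here is a small sketch of the kind of end-to-end statistics (delay, jitter, throughput) one would compute from an exported packet trace; the trace format is an assumption, not OPNET's API.

```python
# Offline sketch of end-to-end packet statistics from an exported trace.
# This is plain trace post-processing, not the OPNET Modeler API.
from statistics import mean

# (send_time_s, recv_time_s, size_bytes) per packet, e.g. exported from a run
trace = [(0.00, 0.04, 1500), (0.10, 0.13, 1500), (0.20, 0.26, 1500)]

delays = [rx - tx for tx, rx, _ in trace]
jitter = mean(abs(a - b) for a, b in zip(delays, delays[1:]))
duration = trace[-1][1] - trace[0][0]
throughput_bps = sum(size * 8 for _, _, size in trace) / duration

print(f"mean delay {mean(delays)*1000:.1f} ms, "
      f"jitter {jitter*1000:.1f} ms, throughput {throughput_bps/1e3:.1f} kbps")
```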

An evaluation methodology for cement concrete lining crack segmentation deep learning model (콘크리트 라이닝 균열 분할 딥러닝 모델 평가 방법)

  • Ham, Sangwoo;Bae, Soohyeon;Lee, Impyeong;Lee, Gyu-Phil;Kim, Donggyou
    • Journal of Korean Tunnelling and Underground Space Association / v.24 no.6 / pp.513-524 / 2022
  • Recently, detecting damage to civil infrastructure from digital images using deep learning has become a very popular research topic. In order to adapt these methodologies to the field, it is essential to explain the robustness of deep learning models. Our research points out that existing pixel-based evaluation metrics for deep learning models are not sufficient for crack detection, since cracks have a linear appearance, and proposes a new evaluation methodology that explains crack segmentation models more rationally. Specifically, we design, implement, and validate a methodology that generates a tolerance buffer alongside the skeletonized ground truth and prediction results, so that the overall topological similarity between the ground truth and the prediction is considered rather than pixel-wise accuracy. Using this methodology, we could overcome the over-estimation and under-estimation problems of crack segmentation model evaluation, and we expect that it can explain crack segmentation deep learning models better.
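
A minimal sketch of a buffered, skeleton-based precision/recall computation in the spirit of the methodology above; the buffer width and scoring details are our assumptions, not the paper's implementation.

```python
# Sketch of buffered (tolerance-based) precision/recall for thin structures.
# Buffer width and scoring are assumptions, not the paper's code.
import numpy as np
from scipy.ndimage import binary_dilation
from skimage.morphology import skeletonize

def buffered_scores(gt_mask, pred_mask, tolerance_px=2):
    gt_skel = skeletonize(gt_mask.astype(bool))
    pred_skel = skeletonize(pred_mask.astype(bool))
    gt_buffer = binary_dilation(gt_skel, iterations=tolerance_px)
    pred_buffer = binary_dilation(pred_skel, iterations=tolerance_px)
    # Precision: predicted skeleton pixels near the ground truth;
    # recall: ground-truth skeleton pixels near the prediction.
    precision = (pred_skel & gt_buffer).sum() / max(pred_skel.sum(), 1)
    recall = (gt_skel & pred_buffer).sum() / max(gt_skel.sum(), 1)
    return precision, recall

gt = np.zeros((64, 64), dtype=bool); gt[32, 10:50] = True
pred = np.zeros_like(gt); pred[33, 12:48] = True  # one pixel off the truth
print(buffered_scores(gt, pred))
```

A pixel-wise score would penalize the one-pixel offset heavily even though the predicted crack traces the true one; the tolerance buffer avoids exactly that under-estimation.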

Comparative assessment of frost event prediction models using logistic regression, random forest, and LSTM networks (로지스틱 회귀, 랜덤포레스트, LSTM 기법을 활용한 서리예측모형 평가)

  • Chun, Jong Ahn;Lee, Hyun-Ju;Im, Seul-Hee;Kim, Daeha;Baek, Sang-Soo
    • Journal of Korea Water Resources Association / v.54 no.9 / pp.667-680 / 2021
  • We investigated changes in frost days and frost-free periods, and comparatively assessed frost event prediction models developed using logistic regression (LR), random forest (RF), and long short-term memory (LSTM) networks. The meteorological variables for model development were collected from the Suwon, Cheongju, and Gwangju stations for the period 1973-2019, for spring (March-May) and fall (September-November). The developed models were then evaluated with precision, recall, and F1 score, together with graphical evaluation methods such as the AUC and reliability diagrams. The results showed significant decreases (at the 0.01 significance level) in the frequency of frost days at all three stations in both spring and fall. Overall, the evaluation metrics showed that the performance of RF was highest, while that of LSTM was lowest. Although high AUC values (above 0.9) were found at the three stations, the reliability diagrams showed inconsistent reliability. Further study is suggested on improving the models' predictability of frost events and of the first and last frost days, as well as their reliability. It would also be beneficial to replicate this study at more stations in other regions.
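
For readers who want to reproduce this kind of comparison, here is a compact sketch of evaluating LR and RF with the metrics named above; the synthetic data merely stands in for the station records, and the LSTM branch is omitted for brevity.

```python
# Sketch comparing LR and RF on a binary frost/no-frost task.
# Synthetic features stand in for the meteorological station data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (f1_score, precision_score,
                             recall_score, roc_auc_score)
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))          # e.g. Tmin, humidity, wind speed, ...
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2000) < -1).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("LR", LogisticRegression(max_iter=1000)),
                    ("RF", RandomForestClassifier(n_estimators=200,
                                                  random_state=0))]:
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(name, precision_score(y_te, pred), recall_score(y_te, pred),
          f1_score(y_te, pred), auc)
```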

A study on combination of loss functions for effective mask-based speech enhancement in noisy environments (잡음 환경에 효과적인 마스크 기반 음성 향상을 위한 손실함수 조합에 관한 연구)

  • Jung, Jaehee;Kim, Wooil
    • The Journal of the Acoustical Society of Korea / v.40 no.3 / pp.234-240 / 2021
  • In this paper, mask-based speech enhancement is improved for effective speech recognition in noisy environments. In mask-based speech enhancement, the enhanced spectrum is obtained by multiplying the noisy speech spectrum by the mask. The VoiceFilter (VF) model is used for mask estimation, and the Spectrogram Inpainting (SI) technique is used to remove residual noise from the enhanced spectrum. We propose a combined loss to further improve speech enhancement: to effectively remove the residual noise in the speech, the positive part of the triplet loss is used together with the component loss. For the experiments, the TIMIT database is reconstructed with NOISEX92 noise and background music samples under various Signal-to-Noise Ratio (SNR) conditions. Source-to-Distortion Ratio (SDR), Perceptual Evaluation of Speech Quality (PESQ), and Short-Time Objective Intelligibility (STOI) are used as performance evaluation metrics. When the VF model was trained with the mean squared error and the SI model was trained with the combined loss, SDR, PESQ, and STOI improved by 0.5, 0.06, and 0.002, respectively, compared to the system trained only with the mean squared error.
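
A minimal sketch of such a combined loss, pairing a spectral MSE term with the positive (anchor-positive) part of the triplet loss; the weighting and the choice of raw spectra as features are assumptions rather than the paper's exact formulation.

```python
# Sketch of a combined loss: spectral MSE plus the positive part of the
# triplet loss (the anchor-positive distance term; no negative/margin here).
# The weight alpha and the feature space are illustrative assumptions.
import torch
import torch.nn.functional as F

def combined_loss(enhanced, clean, alpha=0.1):
    mse = F.mse_loss(enhanced, clean)
    # Positive part of the triplet loss: pull the enhanced spectrum
    # toward the clean spectrum.
    d_pos = torch.norm(enhanced - clean, dim=-1)
    return mse + alpha * d_pos.mean()

enhanced = torch.rand(8, 257)   # batch of enhanced magnitude spectra
clean = torch.rand(8, 257)      # corresponding clean spectra
print(combined_loss(enhanced, clean).item())
```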

Diffusion Tensor-Derived Properties of Benign Oligemia, True "at Risk" Penumbra, and Infarct Core during the First Three Hours of Stroke Onset: A Rat Model

  • Chiu, Fang-Ying;Kuo, Duen-Pang;Chen, Yung-Chieh;Kao, Yu-Chieh;Chung, Hsiao-Wen;Chen, Cheng-Yu
    • Korean Journal of Radiology / v.19 no.6 / pp.1161-1171 / 2018
  • Objective: The aim of this study was to investigate diffusion tensor (DT) imaging-derived properties of benign oligemia, the true "at risk" penumbra (TP), and the infarct core (IC) during the first 3 hours of stroke onset. Materials and Methods: The study was approved by the local animal care and use committee. DT imaging data were obtained from 14 rats after permanent middle cerebral artery occlusion (pMCAO) using a 7T magnetic resonance scanner (Bruker) in room air. Relative cerebral blood flow and apparent diffusion coefficient (ADC) maps were generated to define oligemia, TP, IC, and normal tissue (NT) every 30 minutes for up to 3 hours. Relative fractional anisotropy (rFA), pure anisotropy (rq), diffusion magnitude (rL), ADC (rADC), axial diffusivity (rAD), and radial diffusivity (rRD) values were derived by comparison with the contralateral normal brain. Results: At 30 minutes after pMCAO, the mean volume of oligemia was 24.7 ± 14.1 mm³, that of TP was 81.3 ± 62.6 mm³, and that of IC was 123.0 ± 85.2 mm³. rFA showed an initial paradoxical 10% increase in the IC and TP, and declined afterward. The rq, rL, rADC, rAD, and rRD showed an initial discrepant decrease in the IC (from -24% to -36%) compared with the TP (from -7% to -13%). Significant differences (p < 0.05) in the metrics, except rFA, were found between tissue subtypes in the first 2.5 hours. The rq demonstrated the best overall performance in discriminating TP from IC (accuracy = 92.6%, area under the curve = 0.93), with an optimal cutoff value of -33.90%. The metric values for oligemia and NT remained similar at all time points. Conclusion: Benign oligemia is small and remains microstructurally normal under pMCAO. TP and IC show a distinct evolution of DT-derived properties within the first 3 hours of stroke onset, which is thus potentially useful in predicting the fate of ischemic brain tissue.
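
The tensor metrics above follow standard eigenvalue formulas (Basser-style decomposition). The sketch below computes FA, q, L, ADC, AD, and RD from the three eigenvalues and expresses relative (r-) values as percent change versus the contralateral side; treating these as the paper's exact definitions is our assumption.

```python
# Standard eigenvalue formulas for diffusion tensor metrics; whether they
# match the paper's exact definitions of q and L is our assumption.
import numpy as np

def dti_metrics(evals):
    l1, l2, l3 = np.sort(evals)[::-1]
    md = (l1 + l2 + l3) / 3.0                                 # mean diffusivity (ADC)
    q = np.sqrt((l1 - md)**2 + (l2 - md)**2 + (l3 - md)**2)   # pure anisotropy
    L = np.sqrt(l1**2 + l2**2 + l3**2)                        # diffusion magnitude
    fa = np.sqrt(1.5) * q / L                                 # fractional anisotropy
    ad, rd = l1, (l2 + l3) / 2.0                              # axial / radial diffusivity
    return dict(FA=fa, q=q, L=L, ADC=md, AD=ad, RD=rd)

ipsi = dti_metrics([1.0e-3, 0.4e-3, 0.3e-3])    # lesion-side eigenvalues (mm^2/s)
contra = dti_metrics([1.1e-3, 0.5e-3, 0.4e-3])  # contralateral normal side
# Relative metrics (e.g. rFA) as percent change versus the normal side:
rel = {f"r{k}": 100 * (ipsi[k] / contra[k] - 1) for k in ipsi}
print(rel)
```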

Sketch-based 3D object retrieval using Wasserstein Center Loss (Wasserstein Center 손실을 이용한 스케치 기반 3차원 물체 검색)

  • Ji, Myunggeun;Chun, Junchul;Kim, Namgi
    • Journal of Internet Computing and Services / v.19 no.6 / pp.91-99 / 2018
  • Sketch-based 3D object retrieval is a convenient way to search for various 3D data using human-drawn sketches as queries. In this paper, we propose a new method that uses a Sketch CNN, a Wasserstein CNN, and a Wasserstein center loss for sketch-based 3D object retrieval. Specifically, the Wasserstein center loss learns the center of each object category and reduces the Wasserstein distance between the center and the features of the same category. The proposed retrieval works as follows. First, the Wasserstein CNN extracts features from 2D images taken from various directions around a 3D object using a CNN, and obtains the features of the 3D data by computing the Wasserstein barycenters of the per-image features. Second, the features of the sketch are extracted using a separate Sketch CNN. Finally, the extracted 3D object features and sketch features are trained with the proposed Wasserstein center loss. To demonstrate the superiority of the proposed method, we evaluated it on two benchmark data sets, SHREC 13 and SHREC 14; the proposed method shows better performance on all conventional metrics than state-of-the-art methods.
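
To show the structure of a center loss over learnable per-category centers, here is a sketch in which a plain squared Euclidean distance stands in for the Wasserstein distance actually used in the paper; it illustrates only the "pull features toward their category center" mechanism.

```python
# Sketch of a center loss with learnable per-category centers.
# Squared Euclidean distance is a stand-in for the paper's
# Wasserstein distance; the shapes and counts are illustrative.
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    def __init__(self, num_classes, feat_dim):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features, labels):
        # Distance between each feature and the center of its category.
        diffs = features - self.centers[labels]
        return (diffs ** 2).sum(dim=1).mean()

loss_fn = CenterLoss(num_classes=90, feat_dim=128)  # e.g. SHREC categories
feats = torch.randn(32, 128)                        # sketch or 3D features
labels = torch.randint(0, 90, (32,))
print(loss_fn(feats, labels).item())
```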

The MeSH-Term Query Expansion Models using LDA Topic Models in Health Information Retrieval (MeSH 기반의 LDA 토픽 모델을 이용한 검색어 확장)

  • You, Sukjin
    • Journal of Korean Library and Information Science Society / v.52 no.1 / pp.79-108 / 2021
  • Information retrieval in the health field faces several challenges. Health information terminology is difficult for consumers (laypeople) to understand, and formulating a query with professional terms is not easy for them because health-related terms are more familiar to health professionals. If health terms related to a query were automatically added, it would help consumers find relevant information. The proposed query expansion (QE) models show how to expand a query using MeSH terms. The documents were represented by the MeSH terms found in the full-text articles (i.e., Bag-of-MeSH), and these MeSH terms were then used to generate LDA (Latent Dirichlet Allocation) topic models. A query and the top-k retrieved documents were used to find MeSH terms as topic words related to the query. The LDA topic words were filtered by threshold values of topic probability (TP) and word probability (WP). The thresholds were effective, in an LDA model with a specific number of topics, at increasing IR performance in terms of infAP (inferred Average Precision) and infNDCG (inferred Normalized Discounted Cumulative Gain), which are common IR metrics for large data collections with incomplete judgments. The top k words were chosen by a word score based on TP × WP and the retrieved-document ranking in an LDA model with specific thresholds. The QE model with specific TP and WP thresholds showed improved mean infAP and infNDCG scores over the baseline.
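
A toy sketch of the TP/WP filtering and TP × WP scoring step using gensim; the Bag-of-MeSH corpus, the thresholds, and the query-to-MeSH mapping are placeholders for the paper's setup.

```python
# Toy sketch of LDA-based query expansion with TP/WP thresholds.
# Corpus, thresholds, and query mapping are illustrative placeholders.
from gensim.corpora import Dictionary
from gensim.models import LdaModel

docs = [["Hypertension", "Stroke", "Aspirin"],   # Bag-of-MeSH documents
        ["Diabetes", "Insulin", "Obesity"],
        ["Stroke", "Thrombolysis", "Aspirin"]]
dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]
lda = LdaModel(corpus, id2word=dictionary, num_topics=2, random_state=0)

tp_min, wp_min = 0.2, 0.05                  # topic/word probability thresholds
query_bow = dictionary.doc2bow(["Stroke"])  # query mapped to MeSH terms
expansion = {}
for topic_id, tp in lda.get_document_topics(query_bow, minimum_probability=0.0):
    if tp < tp_min:
        continue                            # topic not relevant enough
    for word, wp in lda.show_topic(topic_id, topn=10):
        if wp >= wp_min:
            # Score candidate expansion terms by TP * WP.
            expansion[word] = max(expansion.get(word, 0.0), tp * wp)

print(sorted(expansion.items(), key=lambda kv: -kv[1]))  # top expansion terms
```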

Detecting and Extracting Changed Objects in Ground Information (지반정보 변화객체 탐지·추출 시스템 개발)

  • Kim, Kwangsoo;Kim, Bong Wan;Jang, In Sung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.39 no.6 / pp.515-523 / 2021
  • An integrated underground spatial map consists of underground facilities, underground structures, and ground information, and is updated periodically. In this paper, we design and implement a system that detects and extracts only the changed ground objects, in order to shorten the map update time. To find the changed objects, every object in the newly input map is compared with the corresponding object in the reference map of the integrated map. Since the entire process of comparing objects and generating results is divided by function, the implemented system is composed of several modules: an object comparer, a changed-object detector, a history data manager, a changed-object extractor, a change-type classifier, and a changed-object saver. We use two metrics, detection rate and extraction rate, to evaluate the performance of the system. When the system was applied to boreholes, ground wells, soil layers, and rock floors in Pyeongtaek, 100% of the inserted, deleted, and updated objects in each layer were detected. In addition, the system ensures the up-to-dateness of the reference map by downloading it whenever maps are compared. In the future, additional research is needed to confirm the stability and effectiveness of the developed system on various data sets so that it can be applied in the field.
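
A minimal sketch of the compare-and-classify step for changed objects; the object representation, the change classes (inserted/deleted/updated), and the detection-rate computation are our assumptions, not the system's actual modules.

```python
# Sketch of comparing a reference map with a newly input map and
# classifying changes; the object schema is a hypothetical stand-in.
def diff_objects(reference, new):
    inserted = [oid for oid in new if oid not in reference]
    deleted = [oid for oid in reference if oid not in new]
    updated = [oid for oid in new
               if oid in reference and new[oid] != reference[oid]]
    return inserted, deleted, updated

# Objects keyed by id with their attributes (e.g. borehole depth).
reference = {"BH-001": {"depth": 12.5}, "BH-002": {"depth": 9.0}}
new = {"BH-001": {"depth": 13.0}, "BH-003": {"depth": 7.2}}

ins, dele, upd = diff_objects(reference, new)
known_changes = 3  # BH-003 inserted, BH-002 deleted, BH-001 updated
detection_rate = (len(ins) + len(dele) + len(upd)) / known_changes
print(ins, dele, upd, f"detection rate {detection_rate:.0%}")
```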