• Title/Summary/Keyword: Linear Complexity


Design of a High-Speed Data Packet Allocation Circuit for Network-on-Chip (NoC 용 고속 데이터 패킷 할당 회로 설계)

  • Kim, Jeonghyun;Lee, Jaesung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.10a
    • /
    • pp.459-461
    • /
    • 2022
  • One of the major differences between Network-on-Chip (NoC) and existing parallel processing systems based on off-chip networks is that data packet routing is performed under a centralized control scheme. In such an environment, the best-effort packet routing problem becomes a real-time assignment problem in which data packet arrival time and processing time are the cost. In this paper, the Hungarian algorithm, a representative algorithm for reducing the computational complexity of the linear algebraic formulation of the assignment problem, is implemented as a hardware accelerator. Logic synthesis with the TSMC 0.18 um standard cell library shows that, compared to a circuit implementing the original operation sequence of the Hungarian algorithm, the circuit designed through case analysis of the cost distribution reduces area by about 16% and propagation delay by about 52%.
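
The assignment problem the accelerator targets is simple to state in software. A minimal sketch, using an illustrative cost matrix (not from the paper): brute force enumerates all n! row-to-column assignments, which is exactly the blow-up the Hungarian algorithm avoids by solving the same problem in O(n^3) time.

```python
from itertools import permutations

def assign_min_cost(cost):
    """Exhaustively search all row-to-column assignments (O(n!)).
    cost[i][j] could model packet i's arrival-plus-processing time on
    output port j. The Hungarian algorithm finds the same optimum in
    O(n^3), which is what makes a hardware implementation practical."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return dict(enumerate(best)), sum(cost[i][best[i]] for i in range(n))

cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
print(assign_min_cost(cost))  # -> ({0: 1, 1: 0, 2: 2}, 5)
```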


Design and Implementation of Crosstalk Canceller Using Warped Common Acoustical Poles (주파수 워핑된 공통 극점을 이용한 음향 간섭제거기의 설계 및 구현)

  • Jeong, Jae-Woong;Park, Young-Cheol;Youn, Dae-Hee;Lee, Seok-Pil
    • The Journal of the Acoustical Society of Korea
    • /
    • v.29 no.5
    • /
    • pp.339-346
    • /
    • 2010
  • Implementing a crosstalk canceller requires filters of large length, because the filter length depends heavily on the length of the head-related impulse responses. To reduce the length of the crosstalk cancellation filters, methods such as frequency warping and common acoustical pole and zero (CAPZ) modeling have been studied. In this paper, we propose a new method combining these two. To accomplish this, we design the filters using CAPZ modeling in the warped domain, and then implement them using the poles and zeros de-warped back to the linear domain. The proposed method provides improved channel separation through frequency warping and a significant reduction in complexity through CAPZ modeling. These results are confirmed through various computer simulations.
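
Frequency warping of this kind is typically realized by substituting a first-order allpass for each unit delay. As a sketch of the standard machinery (the paper's actual warping parameter is not given here), the widely used Smith-Abel approximation yields a Bark-scale allpass coefficient from the sampling rate:

```python
import math

def bark_warp_coeff(fs_hz):
    """Smith-Abel approximation of the first-order allpass coefficient
    lam that warps the frequency axis approximately onto the Bark scale
    at sampling rate fs_hz. The warped delay element is then
    D(z) = (z^-1 - lam) / (1 - lam * z^-1)."""
    fs_khz = fs_hz / 1000.0
    return 1.0674 * math.sqrt(2.0 / math.pi * math.atan(0.06583 * fs_khz)) - 0.1916

print(bark_warp_coeff(44100))  # roughly 0.756 at 44.1 kHz
```

Replacing each unit delay of an FIR/pole-zero model with this allpass concentrates modeling resolution at low frequencies, which is why warped CAPZ models can match head-related responses with far fewer parameters.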

Algorithm for Cross-avoidance Bypass Routing in Numberlink Puzzle (숫자 연결 퍼즐에 관한 교차 회피 우회 경로 알고리즘)

  • Sang-Un Lee
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.24 no.3
    • /
    • pp.95-101
    • /
    • 2024
  • The Numberlink puzzle (NLP), in which each given pair of numbers must be connected by a line through empty cells without crossing the lines of other number pairs, is an NP-complete problem with no known polynomial-time solution. Until now, puzzles have been solved by selecting arbitrary numbers and applying trial-and-error methods. This paper converts the empty cells of a given problem into the vertices of a lattice graph, with edges between adjacent cells. Next, a straight line is drawn between each pair of numbers, and the pairs are divided into groups where crossings occur. A bypass route is then established to avoid the intersections within each cross-number group. Applying the proposed algorithm to 18 benchmark datasets shows that every puzzle can be solved with a linear time complexity of O(n).
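
The grid-to-lattice-graph conversion in the first step can be sketched directly. The BFS routing below is only an illustrative way to route one pair through free cells; the paper's bypass-route construction for crossing groups is more involved.

```python
from collections import deque

def grid_graph(rows, cols, blocked=frozenset()):
    """Lattice graph of the board: every free cell is a vertex, with an
    edge to each horizontally/vertically adjacent free cell."""
    adj = {}
    for r in range(rows):
        for c in range(cols):
            if (r, c) in blocked:
                continue
            adj[(r, c)] = [
                (r + dr, c + dc)
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= r + dr < rows and 0 <= c + dc < cols
                and (r + dr, c + dc) not in blocked
            ]
    return adj

def shortest_route(adj, start, goal):
    """BFS shortest path between one number pair. Cells already used by
    other pairs' routes can be passed as `blocked` when building the
    graph, so routes never cross."""
    prev, queue = {start: None}, deque([start])
    while queue:
        v = queue.popleft()
        if v == goal:
            path = []
            while v is not None:
                path.append(v)
                v = prev[v]
            return path[::-1]
        for w in adj[v]:
            if w not in prev:
                prev[w] = v
                queue.append(w)
    return None  # pair cannot be connected
```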

The Effect of Meta-Features of Multiclass Datasets on the Performance of Classification Algorithms (다중 클래스 데이터셋의 메타특징이 판별 알고리즘의 성능에 미치는 영향 연구)

  • Kim, Jeonghun;Kim, Min Yong;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.23-45
    • /
    • 2020
  • Big data is being created in a wide variety of fields such as medical care, manufacturing, logistics, sales, and SNS, and dataset characteristics are equally diverse. To secure competitiveness, companies need to improve their decision-making capacity using classification algorithms. However, most practitioners do not have sufficient knowledge of which classification algorithm is appropriate for a specific problem domain. In other words, determining the appropriate classification algorithm for given dataset characteristics has been a task requiring expertise and effort, because the relationship between the characteristics of datasets (called meta-features) and the performance of classification algorithms is not yet fully understood. Moreover, there has been little research on meta-features reflecting the characteristics of multi-class data. Therefore, the purpose of this study is to empirically analyze whether meta-features of multi-class datasets have a significant effect on the performance of classification algorithms. In this study, meta-features of multi-class datasets were organized into two factors (data structure and data complexity), and seven representative meta-features were selected. Among them, we included the Herfindahl-Hirschman Index (HHI), originally a market concentration measure, to replace the imbalance ratio (IR), and we developed a new index, the Reverse ReLU Silhouette Score, for the meta-feature set. From the UCI Machine Learning Repository, six representative datasets (Balance Scale, PageBlocks, Car Evaluation, User Knowledge-Modeling, Wine Quality (red), and Contraceptive Method Choice) were selected. The class of each dataset was predicted using the classification algorithms selected in the study (KNN, Logistic Regression, Naïve Bayes, Random Forest, and SVM), with 10-fold cross-validation applied to each dataset. Oversampling from 10% to 100% was applied to each fold, and the meta-features of the dataset were measured. The meta-features selected were HHI, Number of Classes, Number of Features, Entropy, Reverse ReLU Silhouette Score, Nonlinearity of Linear Classifier, and Hub Score; the F1-score was selected as the dependent variable. The results showed that the six meta-features, including the Reverse ReLU Silhouette Score and HHI proposed in this study, have a significant effect on classification performance: (1) the HHI meta-feature proposed in this study was significant for classification performance; (2) unlike the number of classes, the number of variables has a significant and positive effect on classification performance; (3) the number of classes has a negative effect on classification performance; (4) entropy has a significant effect on classification performance; (5) the Reverse ReLU Silhouette Score also significantly affects classification performance at the 0.01 significance level; and (6) the nonlinearity of linear classifiers has a significant negative effect on classification performance. The analyses by individual classification algorithm were also consistent, except that in the per-algorithm regression analysis, the number of variables was not significant for the Naïve Bayes algorithm, unlike the other algorithms. This study makes two theoretical contributions: (1) two new meta-features (HHI and the Reverse ReLU Silhouette Score) were shown to be significant, and (2) the effects of data characteristics on classification performance were investigated using meta-features. Practically, (1) the results can be utilized in developing a system that recommends classification algorithms according to dataset characteristics; and (2) data scientists often search for the optimal algorithm by repeatedly adjusting algorithm parameters because the characteristics of each dataset differ, a process that wastes hardware, cost, time, and manpower. This study is expected to be useful for machine learning and data mining researchers, practitioners, and developers of machine learning-based systems. The study consists of an introduction, related research, the research model, experiments, and conclusion and discussion.
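
The class-concentration idea behind using HHI as an imbalance meta-feature is compact to state. A minimal sketch (the authors' exact normalization is an assumption here):

```python
from collections import Counter

def hhi(labels):
    """Herfindahl-Hirschman Index over class shares: the sum of squared
    class proportions. It equals 1/k for k perfectly balanced classes
    and approaches 1 as a single class dominates, so larger values
    indicate a more imbalanced multi-class dataset."""
    n = len(labels)
    return sum((count / n) ** 2 for count in Counter(labels).values())

print(hhi([0, 1, 0, 1]))  # -> 0.5 (balanced, 2 classes)
print(hhi([0, 0, 0, 1]))  # -> 0.625 (imbalanced)
```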

Design and Performance Evaluation of Selective DFT Spreading Method for PAPR Reduction in Uplink OFDMA System (OFDMA 상향 링크 시스템에서 PAPR 저감을 위한 선택적 DFT Spreading 기법의 설계와 성능 평가)

  • Kim, Sang-Woo;Ryu, Heung-Gyoon
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.18 no.3 s.118
    • /
    • pp.248-256
    • /
    • 2007
  • In this paper, we propose a selective DFT spreading method to solve the high-PAPR problem in the uplink OFDMA system. A selective mechanism is added to DFT spreading, so the DFT spreading method is combined with the SLM method. However, to minimize the increase in computational complexity, unlike the conventional SLM method, the proposed method uses only one DFT spreading block. After the DFT, several copy branches are generated by multiplication with different matrices. Each matrix is obtained by linearly transforming the corresponding phase rotation applied in front of the DFT block, and it has much lower computational complexity than one DFT process. For the simulation, we assume a 512-point IFFT, 300 effective sub-carriers, 1/4 or 1/3 of the sub-carriers allocated to each user, and QPSK modulation. The simulation results show that with 4 copy branches, the proposed method achieves more than about 5.2 dB of PAPR reduction: about 1.8 dB better than the conventional DFT spreading method and 0.95 dB better than conventional SLM using 32 copy branches. Even with 2 copy branches, it outperforms SLM using 32 copy branches. In this comparison, the proposed method has 91.79% lower complexity than SLM using 32 copy branches at similar PAPR reduction performance, demonstrating very good performance. Similar performance can also be expected when all sub-carriers are allocated to one user, as in OFDM.
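
The PAPR quantity being reduced, and the basic DFT-spreading (precoding) step, can be sketched with NumPy. The 512-point IFFT, 300 sub-carriers, and QPSK follow the simulation setup above, but the localized mapping is an assumption for illustration; this is plain DFT spreading, not the authors' selective scheme.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a time-domain signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(0)
n_sc, n_fft = 300, 512                        # effective sub-carriers, IFFT size
bits = rng.integers(0, 2, (2, n_sc)) * 2 - 1
sym = (bits[0] + 1j * bits[1]) / np.sqrt(2)   # QPSK symbols

def to_time(freq_sym):
    """Localized mapping onto the first n_sc sub-carriers, then IFFT."""
    grid = np.zeros(n_fft, dtype=complex)
    grid[:n_sc] = freq_sym
    return np.fft.ifft(grid)

plain = to_time(sym)               # plain OFDMA symbol
spread = to_time(np.fft.fft(sym))  # DFT-spread (precoded) OFDMA symbol

print(papr_db(plain), papr_db(spread))
```

Over many random symbols, the DFT-spread branch exhibits the lower PAPR distribution, which is the effect the selective scheme then amplifies by choosing the best of several cheap copy branches.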

Variation of Hospital Costs and Product Heterogeneity

  • Shin, Young-Soo
    • Journal of Preventive Medicine and Public Health
    • /
    • v.11 no.1
    • /
    • pp.123-127
    • /
    • 1978
  • The major objective of this research is to identify those hospital characteristics that best explain cost variation among hospitals and to formulate linear models that can predict hospital costs. Specific emphasis is placed on hospital output, that is, the identification of diagnosis related patient groups (DRGs) which are medically meaningful and demonstrate similar patterns of hospital resource consumption. A casemix index is developed based on the DRGs identified. Considering the common problems encountered in previous hospital cost research, the following study requirements were established for fulfilling the objectives of this research: 1. Selection of hospitals that exercise similar medical and fiscal practices. 2. Identification of an appropriate data collection mechanism from which demographic and medical characteristics of individual patients, as well as accurate and comparable cost information, can be derived. 3. Development of a patient classification system in which all the patients treated in hospitals can be split into mutually exclusive categories with consistent and stable patterns of resource consumption. 4. Development of a cost-finding mechanism through which patient groups' costs can be made comparable across hospitals. A data set of Medicare patients prepared by the Social Security Administration was selected for the study analysis. The data set contained 27,229 record abstracts of Medicare patients discharged from all but one short-term general hospital in Connecticut during the period from January 1, 1971, to December 31, 1972. Each record abstract contained demographic and diagnostic information, as well as charges for specific medical services received. The AUTOGRP System was used to generate 198 DRGs in which the entire range of Medicare patients was split into mutually exclusive categories, each of which shows a consistent and stable pattern of resource consumption. The 'Departmental Method' was used to generate cost information for the groups of Medicare patients that would be comparable across hospitals. To fulfill the study objectives, an extensive analysis was conducted in the following areas: 1. Analysis of DRGs, in which the level of resource use of each DRG was determined, the length of stay or death rate of each DRG in relation to resource use was characterized, and underlying patterns of the relationships among DRG costs were explained. 2. Exploration of resource-use profiles of hospitals, in which the magnitude of differences in the resource use or death rates incurred in the treatment of Medicare patients among the study hospitals was explored. 3. Casemix analysis, in which four types of casemix-related indices were generated and the significance of these indices in the explanation of hospital costs was examined. 4. Formulation of linear models to predict hospital costs of Medicare patients, in which nine independent variables (i.e., casemix index, hospital size, complexity of service, teaching activity, location, casemix-adjusted death rate index, occupancy rate, and casemix-adjusted length of stay index) were used for determining factors in hospital costs. Results from the study analysis indicated that: 1. The system of 198 DRGs for Medicare patient classification was demonstrated to be a strong tool not only for determining the pattern of hospital resource utilization of Medicare patients, but also for categorizing patients by severity of illness. 2. The weighted mean total case cost (TOTC) of the study hospitals for Medicare patients during the study years was $1127.02 with a standard deviation of $117.20. The hospital with the highest average TOTC ($1538.15) was 2.08 times more expensive than the hospital with the lowest average TOTC ($743.45). The weighted mean per diem total cost (DTOC) of the study hospitals for Medicare patients during the study years was $107.98 with a standard deviation of $15.18. The hospital with the highest average DTOC ($147.23) was 1.87 times more expensive than the hospital with the lowest average DTOC ($78.49). 3. The linear models for each of the six types of hospital costs were formulated using the casemix index and the eight other hospital variables as determinants. These models explained variance to the extent of 68.7 percent of total case cost (TOTC), 63.5 percent of room and board cost (RMC), 66.2 percent of total ancillary service cost (TANC), 66.3 percent of per diem total cost (DTOC), 56.9 percent of per diem room and board cost (DRMC), and 65.5 percent of per diem ancillary service cost (DTANC). The casemix index alone explained approximately one half of interhospital cost variation: 59.1 percent for TOTC and 44.3 percent for DTOC. These results demonstrate that the casemix index is the most important determinant of interhospital cost variation. Future research and policy implications of the results of this study are envisioned in the following three areas: 1. Utilization of casemix-related indices in the Medicare data systems. 2. Refinement of data for hospital cost evaluation. 3. Development of a system for reimbursement and cost control in hospitals.
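
The casemix-index construction described above can be sketched as a cost-weighted average. The exact index definitions used in the study are not reproduced here, so this is one common formulation, with illustrative DRG costs:

```python
def casemix_index(hospital_drg_counts, drg_mean_cost, overall_mean_cost):
    """One common casemix-index construction: the hospital's expected
    cost per case, given its DRG mix and the all-hospital mean cost of
    each DRG, divided by the all-hospital mean cost per case. A value
    of 1.0 means an average mix; >1.0 means a costlier-than-average mix."""
    n_cases = sum(hospital_drg_counts.values())
    expected = sum(count * drg_mean_cost[drg]
                   for drg, count in hospital_drg_counts.items()) / n_cases
    return expected / overall_mean_cost

drg_cost = {"DRG_A": 100.0, "DRG_B": 300.0}  # hypothetical mean costs per DRG
print(casemix_index({"DRG_A": 50, "DRG_B": 50}, drg_cost, 200.0))  # -> 1.0
print(casemix_index({"DRG_A": 100}, drg_cost, 200.0))              # -> 0.5
```

Regressing hospital cost on such an index (plus the other hospital variables) is what yields the explained-variance figures reported above.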


A Study of Reportable Range Setting through Concentrated Control Sample (약물검사에서 관리시료의 농축을 이용한 보고 가능 범위의 설정에 대한 연구)

  • Chang, Sang Wu;Kim, Nam Yong;Choi, Ho Sung;Park, Yong Won;Yun, Keun Young
    • Korean Journal of Clinical Laboratory Science
    • /
    • v.36 no.1
    • /
    • pp.13-18
    • /
    • 2004
  • This study was designed to establish a working range for the reportable range in our own laboratory, covering the upper and lower limits of each test method. For the reportable-range setting, we ran the experiment ten times over 10 days, between runs, for method evaluation. It is generally assumed that the analytical method produces a linear response and that test results between the upper and lower limits are then reportable. CLIA recommends that laboratories verify the reportable range of all moderate- and high-complexity tests. The Clinical Laboratory Improvement Amendments (CLIA) and the Laboratory Accreditation Program of the Korean Society for Laboratory Medicine state that a reportable range is only required for "modified" moderately complex tests. Although linearity requirements have been eliminated from the CLIA regulations and from other accreditation agencies, many inspectors continue to feel that linearity studies are part of good laboratory practice and should be encouraged. It is important to assess the useful reportable range of a laboratory method, i.e., the lowest and highest test results that are reliable and can be reported. Manufacturers make claims for the reportable range of their methods by stating the upper and lower limits of the range; instrument manufacturers state an operating range and a reportable range. Commercial linearity material can be used to verify this range if it adequately covers the stated linear interval. Under CLIA quality control requirements, a laboratory must demonstrate, prior to reporting patient test results, that it can obtain performance specifications for accuracy, precision, and reportable range comparable to those established by the manufacturer. If applicable, the laboratory must also verify the reportable range of patient test results. The reportable range of patient test results is the range of test result values over which the laboratory can establish or verify the accuracy of the instrument, kit, or test system measurement response. We need to define the usable reportable range of the method so that the experiments can be properly planned and valid data collected. The reportable range is usually defined as the range where the analytical response of the method is linear with respect to the concentration of the analyte being measured. In conclusion, the experimental results on reportable range using concentrated control samples and zero calibrators, covering the lowest to highest ranges, were: salicylate $8.8{\mu}g/dL$, phenytoin $0.67{\mu}g/dL$, phenobarbital $1.53{\mu}g/dL$, primidone $0.16{\mu}g/dL$, theophylline $0.2{\mu}g/dL$, vancomycin $1.3{\mu}g/dL$, valproic acid $3.2{\mu}g/dL$, digitoxin 0.17 ng/dL, carbamazepine $0.36{\mu}g/dL$, and acetaminophen $0.7{\mu}g/dL$ at the minimum level; and salicylate $969.9{\mu}g/dL$, phenytoin $38.1{\mu}g/dL$, phenobarbital $60.4{\mu}g/dL$, primidone $24.57{\mu}g/dL$, theophylline $39.2{\mu}g/dL$, vancomycin $83.65{\mu}g/dL$, valproic acid $147.96{\mu}g/dL$, digitoxin 5.04 ng/dL, carbamazepine $19.76{\mu}g/dL$, and acetaminophen $300.92{\mu}g/dL$ at the maximum level.
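
Verifying linearity across a dilution series is the computational core of a reportable-range experiment. A minimal ordinary-least-squares sketch (the dilution levels and measured values below are illustrative, not the paper's data):

```python
def linear_fit(x, y):
    """Ordinary least-squares slope, intercept, and R^2 - a simple check
    that instrument response is linear across the claimed range."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1 - ss_res / ss_tot

# Hypothetical dilution series: % of concentrated control vs. measured value
levels = [0, 25, 50, 75, 100]
measured = [1.0, 26.0, 51.0, 76.0, 101.0]
print(linear_fit(levels, measured))
```

In practice one accepts the range when R^2 is near 1 and the recovery at each level stays within the allowable error of the assigned value.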


Dispersity of CNT and GNF on the Polyurethane Matrix: Effect of Polyurethane Chemical Structure (폴리우레탄 분자구조 변화에 따른 CNT와 GNF의 분산특성 연구)

  • Im, Hyun-Gu;Kim, Hyo-Mi;Kim, Joo-Heon
    • Polymer(Korea)
    • /
    • v.32 no.4
    • /
    • pp.340-346
    • /
    • 2008
  • The aim of this study is to understand the effect of polyurethane structure on the dispersion of CNT and GNF in synthesized polyurethane matrices. Various CNT/PU and GNF/PU composite films were prepared, blending polyurethanes with different hard segments with both CNT and GNF. PU with HDI as the hard segment showed good dispersion of both CNT and GNF because of its linear structure and molecular mobility, while PU containing aromatic rings showed poor dispersion due to its structural complexity. This structural effect also increased electrical conductivity. The PU/CNT composite showed poor dispersion (because of phase separation between the PU matrix and CNT) but good electrical conductivity at its surface (because the CNT collected on the surface of the composite film due to its low density). PU/CNT and PU/GNF composite films have quite low normalized sheet resistance compared with silver/PU nanocomposite films, because fiber-type fillers provide many more contact points than sphere-shaped silver particles do.

The Stratigraphy and Geologic Structure of the Metamorphic Complex in the Northwestern Area of the Kyonggi Massif (경기육괴서북부(京畿陸塊西北部)의 변성암복합체(變成岩複合體)의 층서(層序)와 지질구조(地質構造))

  • Kim, Ok Joon
    • Economic and Environmental Geology
    • /
    • v.6 no.4
    • /
    • pp.201-216
    • /
    • 1973
  • Although the Yonchon System was believed to be widely distributed in the vicinity of Seoul, the capital city of Korea, it was never previously traced from its type locality in Yonchon-gun, from which the name derives, down to the Precambrian metamorphic complex in the Seoul area where the present study was carried out. Due to the inaccessibility of the Yonchon area, the writer also could not trace the system down to the study area so as to correlate them. The present study endeavored to differentiate the general stratigraphy and interpret the structure of the metamorphic complex in the area. In spite of the structural complexity and rapid changes in lithofacies of the complex, key beds were found by which the stratigraphy and structure of the area could be straightened out. The key beds were the Buchon limestone bed in the western part of the area; the Daisongri quartzite bed cropping out in the southeastern area; the Jangrak quartzite bed scattered in several localities in the northwestern, southwestern, and eastern parts of the area; and the Earn quartzite bed isolated in the eastern part of the area. These key beds, together with the broad regional structure, made it possible to differentiate the Precambrian rocks, in ascending order, into the Kyonggi metamorphic complex, the Jangrak group, and the Chunsung group, which are in clinounconformable relation; the first complex was again separated, in ascending order, into the Buchon, Sihung, and Yangpyong metamorphic groups. Although it has been vaguely called the Yonchon system thus far, the Kyonggi metamorphic complex had never been studied before. The complex might, however, belong to early to early-middle Precambrian age. The Jangrak and Chunsung groups were correlated to the Sangwon system in North Korea by the writer (1972), but it became apparent that the rocks of these groups differ in lithology and are more highly metamorphosed than those of the Sangwon system, which has a thick sequence of limestone and is only slightly metamorphosed. Having been deposited at the margin of the basin, the groups naturally possess terrestrial sediments rather than limestone, yet no explanation is at hand as to the cause of such a difference in grade of metamorphism. Thus the writer attempted to correlate both groups to pre-Sangwon and post-Yonchon time, which might be middle to early-late Precambrian. Judging from the differences in grade of deformation and the unconformities between the Kyonggi metamorphic complex, the Jangrak group, and the Chunsung group, three stages of orogeny were established, toward younger age: the Kyonggi and Jangrak orogenies and the Chunsung disturbance. It is rather astonishing that the structure of these Precambrian formations was not affected by the Daebo orogeny of Jurassic age. Post-tectonic block faulting accompanied these orogenies, and in consequence NNE- and N-S-trending faults originated. The faulting was intermittent and repeated until the Daebo orogeny, at which time granites intruded along these faults. The alignment of these faults is manifested by the parallel, straight linear development of valleys and streams in the Kyonggi Massif.


Spatio-temporal Change Detection of Forest Landscape in the Geumho River Watershed using Landscape Metrics (경관메트릭스를 이용한 금호강 유역 산림경관의 시·공간적 변화탐지)

  • Oh, Jeong-Hak;Park, Kyung-Hun;Jung, Sung-Gwan;Lee, Jong-Won
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.8 no.2
    • /
    • pp.81-94
    • /
    • 2005
  • The purpose of this study is to test the applicability of landscape metrics for quantifying and monitoring the landscape structure of the Geumho River watershed, which has undergone heavy environmental disturbance. Landscape metrics were computed from land cover maps (1985, 1999) for the forest patches. The number of variables was reduced from 12 metrics to 3 factors through factor analysis; these factors accounted for over 91% of the variation in the original metrics. We also determined the relative effects of land development on changes in forest landscape structure using multiple linear regression analysis. Within the forest patches, the conversion of forest to urban areas and agriculture resulted in increased fragmentation. Patch area and patch size decreased, and patch density increased, as a result of the conversion of forest to agriculture ($R^2=0.696$, p<0.01). The heterogeneity of patch size and the complexity of patch shape decreased mainly as a result of the conversion of forest to urban areas ($R^2=0.405$, p<0.01). The density of core area and edge tended to increase, but showed no relationship with the conversion of forest to urban areas or agriculture. Future research is needed to analyze correlations between landscape structures and specific environmental and socioeconomic landscape functions.
