• Title/Summary/Keyword: linear complexity


Correlation Matrix Generation Technique with High Robustness for Subspace-based DoA Estimation (부공간 기반 도래각 추정을 위한 높은 강건성을 지닌 상관행렬 생성 기법)

  • Byeon, BuKeun
    • Journal of Advanced Navigation Technology
    • /
    • v.26 no.3
    • /
    • pp.166-171
    • /
    • 2022
  • In this paper, we propose an algorithm to improve the DoA (direction of arrival) estimation performance of subspace-based methods by generating a highly robust correlation matrix from the signals incident on a uniform linear array antenna. Existing subspace-based DoA estimation methods estimate the DoA by obtaining a correlation matrix and dividing it into a signal subspace and a noise subspace. However, at low SNR and with a small number of snapshots, the noise component of the antenna corrupts the correlation matrix, so the signal subspace is estimated inaccurately and the DoA estimation performance degrades. Therefore, a robust correlation matrix is generated by arranging virtual signal vectors obtained from the existing correlation matrix in a sliding manner. In simulations using MUSIC and ESPRIT, the two representative subspace-based methods, the computational complexity increased by less than 2.5% compared to the existing correlation matrix, yet at an RMSE threshold of 1° both MUSIC and ESPRIT showed superior DoA estimation performance at SNRs of 3 dB or more.
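  The abstract describes sliding virtual signal vectors over the correlation matrix but does not spell out the construction. As a rough, hedged illustration of the general sliding idea, the sketch below averages sub-blocks of a sample correlation matrix (classic forward spatial smoothing, a stand-in for the paper's method) before running a MUSIC pseudospectrum search; the array size, subarray length, angles, and variable names are all assumptions for the example.

```python
import numpy as np

def steering(m, theta_deg, d=0.5):
    # Steering vector of an m-element uniform linear array, half-wavelength spacing.
    theta = np.deg2rad(theta_deg)
    return np.exp(-2j * np.pi * d * np.arange(m) * np.sin(theta))

def smoothed_covariance(R, L):
    # Average sliding L x L sub-blocks of the full covariance R (forward spatial smoothing).
    M = R.shape[0]
    K = M - L + 1
    return sum(R[k:k + L, k:k + L] for k in range(K)) / K

def music_spectrum(R, n_sources, angles):
    # Noise-subspace MUSIC pseudospectrum from a covariance matrix R.
    w, V = np.linalg.eigh(R)                     # eigenvalues in ascending order
    En = V[:, :R.shape[0] - n_sources]           # noise-subspace eigenvectors
    return np.array([1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
                     for a in (steering(R.shape[0], ang) for ang in angles)])

# Illustrative use: 8-element ULA, two sources, low SNR, few snapshots.
rng = np.random.default_rng(0)
M, N, snr_db, true_doas = 8, 20, 3, [-10.0, 15.0]
A = np.column_stack([steering(M, t) for t in true_doas])
S = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
noise_pow = 10 ** (-snr_db / 10)
X = A @ S + np.sqrt(noise_pow / 2) * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
R = X @ X.conj().T / N                           # sample correlation matrix

angles = np.arange(-90, 90.5, 0.5)
P = music_spectrum(smoothed_covariance(R, L=6), n_sources=2, angles=angles)
peaks = [i for i in range(1, len(P) - 1) if P[i] > P[i - 1] and P[i] > P[i + 1]]
peaks.sort(key=lambda i: P[i], reverse=True)
print("estimated DoAs:", sorted(angles[i] for i in peaks[:2]))
```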

An automatic rotating annular flume for cohesive sediment erosion experiments: Calibration and preliminary results

  • Steven Figueroa;Minwoo Son
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2023.05a
    • /
    • pp.319-319
    • /
    • 2023
  • Flows of water in the environment (e.g. in a river or estuary) generally occur in complex conditions. This complexity can hinder a general understanding of flows and their related sedimentary processes, such as erosion and deposition. To gain insight in simplified, controlled conditions, hydraulic flumes are a popular type of laboratory research equipment. Linear flumes use pumps to recirculate water, which is not appropriate for investigating cohesive sediments because pumps can break fragile cohesive sediment flocs. To overcome this limitation, the rotating annular flume (RAF) was developed. While a RAF has no pumps, a side effect is that unwanted secondary circulations can occur. To counteract this, the top and bottom lids rotate in opposite directions. Furthermore, a larger flume is considered better as it has less curvature and secondary circulation. While only a few RAFs exist, they are important for theoretical research, which often underlies numerical models. Many of the first generation of RAFs have fallen into disrepair. As new measurement techniques and models become available, there is still a need to research cohesive sediment erosion and deposition in facilities such as a RAF. New RAFs can also have the advantage of being automatic instead of manually operated, thus improving data quality. To further advance our understanding of cohesive sediment erosion and deposition processes, a large, automatic RAF (1.72 m radius, 0.495 m channel depth, 0.275 m channel width) has been constructed at the Hydraulic Laboratory at Chungnam National University (CNU), Korea. The RAF can simulate both unidirectional (river) and bidirectional (tide) flows, with supporting instrumentation for measuring turbulence, bed shear stress, suspended sediment concentration, floc size, bed level, and bed density. Here we present the current status and future prospects of the CNU RAF. In the future, calibration of the rotation rate against bed shear stress and experiments with unidirectional and bidirectional flow using cohesive kaolinite are expected. Preliminary results indicate that the CNU RAF is a valuable tool for fundamental cohesive sediment transport research.


Enhancing Retrieval Performance for Hierarchical Compact Binary Tree (계층형 집약 이진 트리의 검색 성능 개선)

  • Kim, Sung Wan
    • Journal of Creative Information Culture
    • /
    • v.5 no.3
    • /
    • pp.345-353
    • /
    • 2019
  • Several studies have proposed improving storage space efficiency by expressing a binary trie data structure as a linear binary bit string. The compact binary tree approach, generated from a single binary trie, suffers a significant increase in key search time because the binary bit string becomes very long as the size of the input key set increases. To reduce the key search range, a hierarchical compact binary tree technique that hierarchically expresses several small compact binary trees has been proposed; its search time increases in proportion to the number and length of the binary bit strings. In this paper, we hierarchically generate several compact binary trees represented as full binary tries. Search performance is improved by determining, through a simple numeric conversion, the path to the binary bit string corresponding to the search range. In a performance evaluation based on worst-case time and space complexity, the proposed method showed the best performance for retrieval and for key insertion or deletion. In terms of space usage, the proposed method requires only about 67% to 68% of the space of existing methods, showing the best space efficiency.
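  The paper's exact bit-string encoding is not reproduced in the abstract. As a hedged illustration of the general idea of routing a key to one small structure by numeric conversion of its leading bits and searching only there, the sketch below groups fixed-width keys by prefix value; the prefix length, key width, and helper names are assumptions, and a sorted table stands in for each small compact trie.

```python
# Hypothetical sketch: route a fixed-width binary key to one of 2**PREFIX_BITS
# small structures by numeric conversion of its leading bits, then search only there.
PREFIX_BITS = 4          # assumed prefix length used for the numeric conversion
KEY_BITS = 16            # assumed fixed key width

def build_hierarchy(keys):
    # Group keys into small sorted tables indexed by the integer value of their prefix.
    groups = {}
    for k in keys:
        groups.setdefault(k >> (KEY_BITS - PREFIX_BITS), []).append(k)
    return {p: sorted(g) for p, g in groups.items()}

def lookup(hierarchy, key):
    # The prefix value directly selects the subtree; only that group is searched.
    group = hierarchy.get(key >> (KEY_BITS - PREFIX_BITS), [])
    lo, hi = 0, len(group) - 1
    while lo <= hi:                               # binary search stands in for the trie walk
        mid = (lo + hi) // 2
        if group[mid] == key:
            return True
        if group[mid] < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return False

h = build_hierarchy([0x0012, 0x00A3, 0x1F40, 0xBEEF])
print(lookup(h, 0xBEEF), lookup(h, 0x1234))       # True False
```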

Modeling Soil Temperature of Sloped Surfaces by Using a GIS Technology

  • Yun, Jin I.;Taylor, S. Elwynn
    • KOREAN JOURNAL OF CROP SCIENCE
    • /
    • v.43 no.2
    • /
    • pp.113-119
    • /
    • 1998
  • Spatial patterns of soil temperature on sloping lands are related to the amount of solar irradiance at the surface. Since soil temperature is a critical determinant of many biological processes occurring in the soil, an accurate prediction of soil temperature distribution could be beneficial to agricultural and environmental management. However, at least two problems are identified in soil temperature prediction over natural sloped surfaces. One is the complexity of converting solar irradiances to corresponding soil temperatures, and the other, if the first problem could be solved, is the difficulty in handling large volumes of geo-spatial data. Recent developments in geographic information systems (GIS) provide the opportunity and tools to spatially organize and effectively manage data for modeling. In this paper, a simple model for conversion of solar irradiance to soil temperature is developed within a GIS environment. The irradiance-temperature conversion model is based on a geophysical variable consisting of daily short- and long-wave radiation components calculated for any slope. The short-wave component is scaled to accommodate a simplified surface energy balance expression. Linear regression equations are derived for 10 and 50 cm soil temperatures by using this variable as a single determinant and based on a long term observation data set from a horizontal location. Extendability of these equations to sloped surfaces is tested by comparing the calculated data with the monthly mean soil temperature data observed in Iowa and at 12 locations near the Tennessee - Kentucky border with various slope and aspect factors. Calculated soil temperature variations agreed well with the observed data. Finally, this method is applied to a simulation study of daily mean soil temperatures over sloped corn fields on a 30 m by 30 m resolution. The outputs reveal potential effects of topography including shading by neighboring terrain as well as the slope and aspect of the land itself on the soil temperature.
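  The abstract specifies only the form of the model, a single radiation-based determinant feeding linear regressions for 10 cm and 50 cm soil temperature, not its coefficients. A minimal sketch of fitting such single-variable regressions with NumPy is shown below; the input files and column meanings are assumptions for illustration.

```python
import numpy as np

# Hypothetical inputs: a daily radiation index (scaled short-wave plus long-wave
# components) and observed soil temperatures at 10 cm and 50 cm from a horizontal site.
rad_index = np.loadtxt("radiation_index.txt")      # assumed input file
t10_obs   = np.loadtxt("soil_temp_10cm.txt")       # assumed input file
t50_obs   = np.loadtxt("soil_temp_50cm.txt")       # assumed input file

# One-determinant linear regressions, as described in the abstract: T = a * R + b.
a10, b10 = np.polyfit(rad_index, t10_obs, deg=1)
a50, b50 = np.polyfit(rad_index, t50_obs, deg=1)

def predict_soil_temp(rad, a, b):
    # Apply the regression fitted at the horizontal site to a slope's radiation index.
    return a * rad + b

print(predict_soil_temp(rad_index[:5], a10, b10))
```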


Design of a High-Speed Data Packet Allocation Circuit for Network-on-Chip (NoC 용 고속 데이터 패킷 할당 회로 설계)

  • Kim, Jeonghyun;Lee, Jaesung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.10a
    • /
    • pp.459-461
    • /
    • 2022
  • One of the big differences between a Network-on-Chip (NoC) and existing parallel processing systems based on off-chip networks is that data packet routing is performed using a centralized control scheme. In such an environment, the best-effort packet routing problem becomes a real-time assignment problem in which data packet arrival time and processing time are the costs. In this paper, the Hungarian algorithm, a representative algorithm for reducing the computational complexity of the linear assignment problem, is implemented in the form of a hardware accelerator. As a result of logic synthesis using the TSMC 0.18 um standard cell library, the area of the circuit designed through case analysis of the cost distribution is reduced by about 16% and its propagation delay by about 52%, compared to a circuit implementing the original operation sequence of the Hungarian algorithm.
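  The hardware-level case analysis is not described in the abstract. As a software-level illustration of the underlying assignment problem (packets mapped to output resources with arrival-plus-processing time as cost), here is a minimal sketch using SciPy's Hungarian-algorithm solver; the cost values and matrix shape are assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Assumed cost matrix: rows = waiting data packets, columns = output resources,
# entry = packet arrival time + expected processing time on that resource.
cost = np.array([
    [4.0, 1.0, 3.0],
    [2.0, 0.0, 5.0],
    [3.0, 2.0, 2.0],
])

# linear_sum_assignment implements the Hungarian (Kuhn-Munkres) method.
rows, cols = linear_sum_assignment(cost)
print(list(zip(rows, cols)))            # optimal packet-to-resource mapping
print(cost[rows, cols].sum())           # minimum total cost of the assignment
```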


Design and Implementation of Crosstalk Canceller Using Warped Common Acoustical Poles (주파수 워핑된 공통 극점을 이용한 음향 간섭제거기의 설계 및 구현)

  • Jeong, Jae-Woong;Park, Young-Cheol;Youn, Dae-Hee;Lee, Seok-Pil
    • The Journal of the Acoustical Society of Korea
    • /
    • v.29 no.5
    • /
    • pp.339-346
    • /
    • 2010
  • Implementing a crosstalk canceller requires filters of large length, because the filter length depends heavily on the length of the head-related impulse responses. To reduce the length of the crosstalk cancellation filters, methods such as frequency warping and common acoustical pole and zero (CAPZ) modeling have been researched. In this paper, we propose a new method combining these two approaches: the filters are designed with CAPZ modeling in the warped domain and then implemented using the poles and zeros de-warped back to the linear domain. The proposed method provides improved channel separation performance through frequency warping and a significant reduction in complexity through CAPZ modeling, which is confirmed through various computer simulations.
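  The abstract does not give the de-warping formula. As a hedged illustration of mapping a pole or zero designed in a warped (allpass-transformed) domain back to the linear-frequency domain, the sketch below applies the standard first-order allpass substitution; the warping coefficient and the example pole pair are assumed values, not the paper's design.

```python
import numpy as np

LAMBDA = 0.7    # assumed warping coefficient (roughly auditory-scale warping at 44.1 kHz)

def dewarp(pole_or_zero, lam=LAMBDA):
    # Map a pole/zero of a filter designed in the warped domain (z_hat) back to the
    # linear domain, using the allpass substitution
    #   z_hat^{-1} = (z^{-1} - lam) / (1 - lam * z^{-1}),
    # under which a warped-domain root p becomes (p + lam) / (1 + lam * p).
    return (pole_or_zero + lam) / (1.0 + lam * pole_or_zero)

# Example: a complex-conjugate pole pair designed in the warped domain.
warped_poles = np.array([0.9 * np.exp(1j * 0.4), 0.9 * np.exp(-1j * 0.4)])
print(dewarp(warped_poles))   # corresponding linear-domain poles for implementation
```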

Algorithm for Cross-avoidance Bypass Routing in Numberlink Puzzle (숫자 연결 퍼즐에 관한 교차 회피 우회 경로 알고리즘)

  • Sang-Un Lee
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.24 no.3
    • /
    • pp.95-101
    • /
    • 2024
  • The numberlink puzzle (NLP), in which each given pair of numbers must be connected by a line through empty cells without crossing the connecting lines of other number pairs, is an NP-complete problem with no known polynomial-time solution. Until now, puzzles have been solved by selecting arbitrary numbers and applying trial-and-error methods. For a given problem, this paper converts the empty cells into vertices of a lattice graph whose edges connect adjacent cells. Next, straight lines are drawn between the number pairs, and the numbers are divided into groups where crossings occur. A bypass route is then established to avoid the intersections within each crossing group. Applying the proposed algorithm to 18 benchmark instances showed that the puzzle could be solved with a linear time complexity of O(n) for all of them.
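  The abstract only names the lattice-graph conversion and the crossing test. As a small, hedged illustration of those first two steps, the sketch below builds a 4-neighbour adjacency list over the cells of a grid and checks whether two straight number-pair segments intersect; the grid contents and helper names are assumptions, and collinear cases are ignored for brevity.

```python
# Hypothetical 4x4 grid: 0 = empty cell, other values = numbered endpoints.
grid = [
    [1, 0, 0, 2],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
    [2, 0, 0, 1],
]

def lattice_graph(grid):
    # Vertices are grid cells; edges connect 4-neighbour cells (a simplification:
    # the paper uses the empty cells between the number pairs).
    rows, cols = len(grid), len(grid[0])
    return {(r, c): [(r + dr, c + dc)
                     for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if 0 <= r + dr < rows and 0 <= c + dc < cols]
            for r in range(rows) for c in range(cols)}

def segments_cross(p1, p2, q1, q2):
    # Orientation test: do the straight segments p1-p2 and q1-q2 intersect?
    def orient(a, b, c):
        v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        return (v > 0) - (v < 0)
    return (orient(p1, p2, q1) != orient(p1, p2, q2) and
            orient(q1, q2, p1) != orient(q1, q2, p2))

adj = lattice_graph(grid)
# Pair 1 runs corner to corner, pair 2 runs the other diagonal: their straight lines cross.
print(segments_cross((0, 0), (3, 3), (0, 3), (3, 0)))   # True -> a bypass route is needed
```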

The Effect of Meta-Features of Multiclass Datasets on the Performance of Classification Algorithms (다중 클래스 데이터셋의 메타특징이 판별 알고리즘의 성능에 미치는 영향 연구)

  • Kim, Jeonghun;Kim, Min Yong;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.23-45
    • /
    • 2020
  • Big data is being created in a wide variety of fields such as medical care, manufacturing, logistics, sales, and SNS, and dataset characteristics are correspondingly diverse. To secure corporate competitiveness, it is necessary to improve decision-making capacity using classification algorithms. However, most practitioners do not have sufficient knowledge of which classification algorithm is appropriate for a specific problem area. In other words, determining which classification algorithm suits the characteristics of a given dataset has been a task requiring expertise and effort, because the relationship between dataset characteristics (called meta-features) and the performance of classification algorithms has not been fully understood. Moreover, there has been little research on meta-features reflecting the characteristics of multi-class datasets. Therefore, the purpose of this study is to empirically analyze whether the meta-features of multi-class datasets have a significant effect on the performance of classification algorithms. In this study, meta-features of multi-class datasets were grouped into two factors, data structure and data complexity, and seven representative meta-features were selected. Among those, we included the Herfindahl-Hirschman Index (HHI), originally a market concentration index, as a meta-feature replacing the imbalance ratio (IR), and we added a newly developed index, the Reverse ReLU Silhouette Score, to the meta-feature set. From the UCI Machine Learning Repository, six representative datasets (Balance Scale, PageBlocks, Car Evaluation, User Knowledge-Modeling, Wine Quality (red), Contraceptive Method Choice) were selected. Each dataset was classified using the algorithms selected in the study (KNN, Logistic Regression, Naïve Bayes, Random Forest, and SVM). For each dataset, 10-fold cross-validation was applied; oversampling from 10% to 100% was applied to each fold and the meta-features of the dataset were measured. The selected meta-features are HHI, Number of Classes, Number of Features, Entropy, Reverse ReLU Silhouette Score, Nonlinearity of Linear Classifier, and Hub Score, with F1-score as the dependent variable. The results showed that the six meta-features, including the Reverse ReLU Silhouette Score and HHI proposed in this study, have a significant effect on classification performance. (1) The HHI meta-feature proposed in this study was significant for classification performance. (2) Unlike the number of classes, the number of features has a significant and positive effect on classification performance. (3) The number of classes has a negative effect on classification performance. (4) Entropy has a significant effect on classification performance. (5) The Reverse ReLU Silhouette Score also significantly affects classification performance at the 0.01 significance level. (6) The nonlinearity of a linear classifier has a significant negative effect on classification performance. The analyses by classification algorithm were also consistent; in the per-algorithm regression analysis, the number of features was not significant for the Naïve Bayes algorithm, unlike for the other algorithms.
This study has two theoretical contributions: (1) two new meta-features (HHI and the Reverse ReLU Silhouette Score) were shown to be significant, and (2) the effects of data characteristics on classification performance were investigated using meta-features. The practical contributions are as follows: (1) the results can be used to develop a system that recommends classification algorithms according to dataset characteristics; (2) because data characteristics differ, many data scientists search for a suitable algorithm by repeatedly adjusting algorithm parameters, wasting hardware, cost, time, and manpower, and this study can reduce such waste. The study is expected to be useful to machine learning and data mining researchers, practitioners, and developers of machine learning-based systems. The paper consists of an introduction, related research, the research model, experiments, and a conclusion and discussion.
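The abstract adopts the Herfindahl-Hirschman Index as a class-imbalance meta-feature but does not state the exact normalization used. A hedged sketch computing HHI over class proportions, together with the entropy meta-feature for comparison, is shown below; the example label vectors are illustrative only.

```python
import numpy as np
from collections import Counter

def hhi(labels):
    # Herfindahl-Hirschman Index over class shares: sum of squared proportions.
    # Ranges from 1/k (perfectly balanced k classes) to 1 (a single class).
    counts = np.array(list(Counter(labels).values()), dtype=float)
    shares = counts / counts.sum()
    return float(np.sum(shares ** 2))

def class_entropy(labels):
    # Shannon entropy of the class distribution, another imbalance-related meta-feature.
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

y_balanced   = [0] * 50 + [1] * 50 + [2] * 50
y_imbalanced = [0] * 120 + [1] * 20 + [2] * 10
print(hhi(y_balanced), hhi(y_imbalanced))              # ~0.333 vs ~0.662
print(class_entropy(y_balanced), class_entropy(y_imbalanced))
```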

Design and Performance Evaluation of Selective DFT Spreading Method for PAPR Reduction in Uplink OFDMA System (OFDMA 상향 링크 시스템에서 PAPR 저감을 위한 선택적 DFT Spreading 기법의 설계와 성능 평가)

  • Kim, Sang-Woo;Ryu, Heung-Gyoon
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.18 no.3 s.118
    • /
    • pp.248-256
    • /
    • 2007
  • In this paper, we propose a selective DFT spreading method to solve the high PAPR problem in uplink OFDMA systems. A selective characteristic is added to DFT spreading, so the DFT spreading method is combined with the SLM method. However, to minimize the increase in computational complexity, and unlike the common SLM method, the proposed method uses only one DFT spreading block. After the DFT, several copy branches are generated by multiplying by different matrices; each matrix is obtained by linearly transforming the corresponding phase rotation placed in front of the DFT block, and this requires much less computation than an additional DFT process. For the simulation, a 512-point IFFT is used, the number of effective sub-carriers is 300, the fraction of sub-carriers allocated to each user is 1/4 or 1/3, and QPSK modulation is used. The simulation results show that with 4 copy branches the proposed method achieves about 5.2 dB of PAPR reduction, which is about 1.8 dB better than the common DFT spreading method and 0.95 dB better than common SLM using 32 copy branches. Even with 2 copy branches, it outperforms SLM using 32 copy branches. In this comparison, the proposed method has 91.79% lower complexity than SLM using 32 copy branches while achieving similar PAPR reduction performance, demonstrating very good performance. Similar performance can also be expected when all sub-carriers are allocated to one user, as in OFDM.
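  The selective branch construction is specific to the paper; as a baseline illustration of why DFT spreading reduces PAPR in the first place, the sketch below compares the PAPR statistics of plain QPSK OFDMA symbols with DFT-spread symbols. The 512-point IFFT and the 75-sub-carrier allocation (1/4 of 300 effective sub-carriers) follow the abstract, while the localized mapping onto the first bins and the symbol count are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N_FFT, N_ALLOC, N_SYMBOLS = 512, 75, 2000    # 512-point IFFT, 1/4 of 300 sub-carriers

def papr_db(x):
    # Peak-to-average power ratio of one time-domain symbol, in dB.
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def qpsk(n):
    return (rng.choice([-1, 1], n) + 1j * rng.choice([-1, 1], n)) / np.sqrt(2)

plain, spread = [], []
for _ in range(N_SYMBOLS):
    d = qpsk(N_ALLOC)
    # Plain OFDMA: map the data symbols straight onto the allocated sub-carriers.
    X = np.zeros(N_FFT, complex); X[:N_ALLOC] = d
    plain.append(papr_db(np.fft.ifft(X)))
    # DFT spreading (SC-FDMA style): pre-spread with an N_ALLOC-point DFT first.
    X = np.zeros(N_FFT, complex); X[:N_ALLOC] = np.fft.fft(d) / np.sqrt(N_ALLOC)
    spread.append(papr_db(np.fft.ifft(X)))

# 99.9th-percentile PAPR: the DFT-spread signal should come out several dB lower.
print(np.percentile(plain, 99.9), np.percentile(spread, 99.9))
```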

Variation of Hospital Costs and Product Heterogeneity

  • Shin, Young-Soo
    • Journal of Preventive Medicine and Public Health
    • /
    • v.11 no.1
    • /
    • pp.123-127
    • /
    • 1978
  • The major objective of this research is to identify those hospital characteristics that best explain cost variation among hospitals and to formulate linear models that can predict hospital costs. Specific emphasis is placed on hospital output, that is, the identification of diagnosis related patient groups (DRGs) which are medically meaningful and demonstrate similar patterns of hospital resource consumption. A casemix index is developed based on the DRGs identified. Considering the common problems encountered in previous hospital cost research, the following study requirements are established for fulfilling the objectives of this research: 1. Selection of hospitals that exercise similar medical and fiscal practices. 2. Identification of an appropriate data collection mechanism from which demographic and medical characteristics of individual patients as well as accurate and comparable cost information can be derived. 3. Development of a patient classification system in which all the patients treated in hospitals can be split into mutually exclusive categories with consistent and stable patterns of resource consumption. 4. Development of a cost finding mechanism through which patient groups' costs can be made comparable across hospitals. A data set of Medicare patients prepared by the Social Security Administration was selected for the study analysis. The data set contained 27,229 record abstracts of Medicare patients discharged from all but one short-term general hospital in Connecticut during the period from January 1, 1971, to December 31, 1972. Each record abstract contained demographic and diagnostic information, as well as charges for specific medical services received. The AUTOGRP System was used to generate 198 DRGs in which the entire range of Medicare patients was split into mutually exclusive categories, each of which shows a consistent and stable pattern of resource consumption. The Departmental Method was used to generate cost information for the groups of Medicare patients that would be comparable across hospitals. To fulfill the study objectives, an extensive analysis was conducted in the following areas: 1. Analysis of DRGs, in which the level of resource use of each DRG was determined, the length of stay or death rate of each DRG in relation to resource use was characterized, and underlying patterns of the relationships among DRG costs were explained. 2. Exploration of resource use profiles of hospitals, in which the magnitude of differences in the resource use or death rates incurred in the treatment of Medicare patients among the study hospitals was explored. 3. Casemix analysis, in which four types of casemix-related indices were generated, and the significance of these indices in the explanation of hospital costs was examined. 4. Formulation of linear models to predict hospital costs of Medicare patients, in which nine independent variables (i.e., casemix index, hospital size, complexity of service, teaching activity, location, casemix-adjusted death rate index, occupancy rate, and casemix-adjusted length of stay index) were used as the determining factors of hospital costs. Results from the study analysis indicated that: 1. The system of 198 DRGs for Medicare patient classification was demonstrated to be a strong tool not only for determining the pattern of hospital resource utilization of Medicare patients, but also for categorizing patients by their severity of illness.
2. The weighted mean total case cost (TOTC) of the study hospitals for Medicare patients during the study years was $1,127.02 with a standard deviation of $117.20. The hospital with the highest average TOTC ($1,538.15) was 2.08 times more expensive than the hospital with the lowest average TOTC ($743.45). The weighted mean per diem total cost (DTOC) of the study hospitals for Medicare patients during the study years was $107.98 with a standard deviation of $15.18. The hospital with the highest average DTOC ($147.23) was 1.87 times more expensive than the hospital with the lowest average DTOC ($78.49). 3. The linear models for each of the six types of hospital costs were formulated using the casemix index and the eight other hospital variables as the determinants. These models explained variance to the extent of 68.7 percent of total case cost (TOTC), 63.5 percent of room and board cost (RMC), 66.2 percent of total ancillary service cost (TANC), 66.3 percent of per diem total cost (DTOC), 56.9 percent of per diem room and board cost (DRMC), and 65.5 percent of per diem ancillary service cost (DTANC). The casemix index alone explained approximately one half of interhospital cost variation: 59.1 percent for TOTC and 44.3 percent for DTOC. These results demonstrate that the casemix index is the most important determinant of interhospital cost variation. Future research and policy implications of the results of this study are envisioned in the following three areas: 1. Utilization of casemix-related indices in the Medicare data systems. 2. Refinement of data for hospital cost evaluation. 3. Development of a system for reimbursement and cost control in hospitals.
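  The abstract reports the explanatory power of the linear cost models but not a fitting procedure. A hedged sketch of fitting such a multi-variable linear model and computing its R² with NumPy is shown below, assuming a table with one row per hospital containing the determinant variables (casemix index, size, and so on) and the total case cost; the file name and column layout are assumptions.

```python
import numpy as np

# Assumed input: one row per hospital, predictor columns first, total case cost (TOTC) last.
data = np.loadtxt("hospital_variables.csv", delimiter=",", skiprows=1)
X_raw, y = data[:, :-1], data[:, -1]

X = np.column_stack([np.ones(len(y)), X_raw])    # add an intercept term
beta, *_ = np.linalg.lstsq(X, y, rcond=None)     # ordinary least squares fit

y_hat = X @ beta
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)
print("coefficients:", beta)
print("R^2:", r2)                                # share of interhospital cost variation explained
```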
