• Title/Summary/Keyword: cosine

Search results: 1,081 (processing time: 0.023 seconds)

Measuring Equal Weighted Voting to the Local Council Elections and the Apportionment: Focusing the 4th to the 6th Metropolitan Council Elections (지방의회 선거의 표의 등가성 측정과 선거구획정: 제4-6회 시·도의회의원 선거를 중심으로)

  • Kim, Jeong Do;Kim, Gyeong Il
    • Korean Journal of Legislative Studies
    • /
    • v.24 no.1
    • /
    • pp.241-276
    • /
    • 2018
  • This article measures the equality of weighted voting with a new index to evaluate the fairness of the redistricting system used in the 4th to 6th metropolitan council elections. The cosine square index used in the article is useful for examining both the equality of population among metropolitan regions and the fairness of the electoral system as a whole, and it is simple to calculate. The fairness results for the 4th to 6th metropolitan council elections, as measured by the cosine square index, are as follows. Because the 4th metropolitan council election uniformly elected two members per electoral district regardless of population size, it shows low equality of population between districts. However, following the Constitutional Court's 2007 decision, the permitted population variation between electoral districts was tightened to 4:1, which significantly increased the equality of population between districts from the 5th metropolitan council election onward. Throughout the 4th to 6th elections, rural electoral districts consistently show the lowest equality of population, and the disparity between urban and rural areas, as well as between the capital region and the rest of the country, continues to grow. This paper emphasizes that electoral apportionment in local council elections should reflect regional diversity and community identity.
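The abstract does not reproduce the formula of the cosine square index; a minimal sketch of one natural construction, assuming (this is an assumption, not the paper's stated definition) that the index is the squared cosine of the angle between the district population vector and the seat allocation vector, so that 1.0 means perfectly proportional apportionment:

```python
import math

def cosine_square_index(populations, seats):
    """Squared cosine of the angle between the district population
    vector and the seat allocation vector (assumed construction):
    1.0 = perfectly proportional apportionment, lower = more
    malapportioned."""
    dot = sum(p * s for p, s in zip(populations, seats))
    norm_p = math.sqrt(sum(p * p for p in populations))
    norm_s = math.sqrt(sum(s * s for s in seats))
    return (dot / (norm_p * norm_s)) ** 2

# Seats exactly proportional to population: index is 1.0
assert abs(cosine_square_index([100, 200, 300], [1, 2, 3]) - 1.0) < 1e-12

# Uniform two-seat districts over unequal populations (as in the
# 4th election) score strictly lower
uniform = cosine_square_index([100, 200, 300], [2, 2, 2])
assert uniform < 1.0
```

Under this construction, the index captures in one number how far a seat allocation deviates from proportionality, which matches the abstract's claim of simple calculation.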

Evaluation of the DCT-PLS Method for Spatial Gap Filling of Gridded Data (격자자료 결측복원을 위한 DCT-PLS 기법의 활용성 평가)

  • Youn, Youjeong;Kim, Seoyeon;Jeong, Yemin;Cho, Subin;Lee, Yangwon
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.6_1
    • /
    • pp.1407-1419
    • /
    • 2020
  • Long-term gridded time-series data are crucial for analyzing changes in the Earth's environment. Climate reanalyses and satellite images are now used as global-scale, periodic, and quantitative information on the atmosphere and land surface. This paper examines the feasibility of DCT-PLS (penalized least squares regression based on the discrete cosine transform) for spatial gap filling of gridded data through experiments on multiple variables. Because gap-free data are required for an objective comparison of original and gap-filled data, we used LDAPS (Local Data Assimilation and Prediction System) daily data and MODIS (Moderate Resolution Imaging Spectroradiometer) monthly products. In experiments on relative humidity, wind speed, LST (land surface temperature), and NDVI (normalized difference vegetation index), we confirmed that randomly generated gaps were filled with values very close to the original data, with correlation coefficients over 0.95 for all four variables. Because the DCT-PLS method requires no ancillary data, can draw on both spatial and temporal information, and computes quickly, it is suitable for operational satellite data processing systems.
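The abstract does not spell out the DCT-PLS algorithm itself; a minimal 1-D sketch in the spirit of Garcia-style penalized least squares smoothing, assuming a second-difference penalty in the DCT basis and an iterate-and-reimpose scheme for the gaps (the 2-D gridded case works the same way with a 2-D DCT):

```python
import numpy as np
from scipy.fft import dct, idct

def dct_pls_fill(y, mask, s=1.0, n_iter=100):
    """Fill gaps in a 1-D series by penalized least squares in the
    DCT domain: low-pass the current estimate with a DCT-domain
    filter, re-impose the observed values, and iterate.
    `mask` is True where y is observed."""
    n = y.size
    # Eigenvalues of the second-difference penalty in the DCT basis
    lam = -2.0 + 2.0 * np.cos(np.arange(n) * np.pi / n)
    gamma = 1.0 / (1.0 + s * lam ** 2)
    z = np.where(mask, y, y[mask].mean())  # initial guess for the gaps
    for _ in range(n_iter):
        z_smooth = idct(gamma * dct(z, norm='ortho'), norm='ortho')
        z = np.where(mask, y, z_smooth)    # keep observed values fixed
    return z

# A smooth signal with a few missing points is recovered closely
x = np.linspace(0, 2 * np.pi, 50)
truth = np.sin(x)
mask = np.ones(50, dtype=bool)
mask[[10, 11, 30]] = False
filled = dct_pls_fill(truth, mask)
assert np.max(np.abs(filled[~mask] - truth[~mask])) < 0.1
```

Because the only per-iteration cost is a forward and inverse DCT, the method stays fast on large grids, consistent with the abstract's note on computation speed.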

Personalized insurance product based on similarity (유사도를 활용한 맞춤형 보험 추천 시스템)

  • Kim, Joon-Sung;Cho, A-Ra;Oh, Hayong
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.26 no.11
    • /
    • pp.1599-1607
    • /
    • 2022
  • The data mainly used for the models are personal information and insurance product information. With these data, we propose three types of models: a content-based filtering model, a collaborative filtering model, and a classification-based model. The content-based filtering model computes the cosine of the angle between user and item feature vectors and recommends items by cosine similarity; before computing the similarity, however, users are divided into several groups by their features, using both the K-means clustering algorithm and a manually defined segmentation. The collaborative filtering model uses the interactions users have with items. The classification-based model recommends items with decision tree and random forest classifiers. According to the results, the content-based filtering model performs best. Since that model recommends items based on demographic and user features, the result indicates that these features are key to offering more appropriate products.
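The content-based step above can be illustrated with a toy sketch (the feature names, vectors, and product names are invented for the example; the paper's actual features and K-means segmentation are omitted):

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine of the angle between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend(user_vec, item_matrix, item_names, top_k=2):
    """Rank items by cosine similarity to the user's feature vector."""
    scores = [cosine_sim(user_vec, item) for item in item_matrix]
    order = np.argsort(scores)[::-1][:top_k]
    return [item_names[i] for i in order]

# Hypothetical features: [age_norm, has_dependents, risk_score]
user = np.array([0.3, 1.0, 0.2])
items = np.array([
    [0.9, 0.0, 0.8],   # retirement annuity
    [0.3, 1.0, 0.1],   # family health plan
    [0.2, 0.9, 0.3],   # child education plan
])
names = ["annuity", "family_health", "child_education"]
assert recommend(user, items, names)[0] == "family_health"
```

The item whose feature direction best matches the user's profile ranks first, which is the core of the content-based model the abstract describes.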

A Study on the Method of Scholarly Paper Recommendation Using Multidimensional Metadata Space (다차원 메타데이터 공간을 활용한 학술 문헌 추천기법 연구)

  • Miah Kam;Jee Yeon Lee
    • Journal of the Korean Society for Information Management
    • /
    • v.40 no.1
    • /
    • pp.121-148
    • /
    • 2023
  • The purpose of this study is to propose a scholarly paper recommendation system with excellent performance based on metadata attribute similarity. The study combines techniques from two sub-fields of Library and Information Science: metadata use from Information Organization, and co-citation analysis, author bibliographic coupling, co-occurrence frequency, and cosine similarity from Bibliometrics. For the experiments, a total of 9,643 paper metadata records related to "inequality" and "divide" were collected and refined to derive relative coordinate values between the author, keyword, and title attributes using cosine similarity. Experiments were then conducted to select the weight conditions and numbers of dimensions that yielded good performance. The results were presented to and evaluated by users, and on that basis the study discusses the research questions through analysis of reference nodes and recommendation combination characteristics, conjoint analysis, and comparative analysis. Overall, performance was excellent when author-related attributes were used alone or in combination with title-related attributes. If the proposed technique is applied to a wide range of samples, it could help improve recommendation performance not only in literature recommendation for information services but also in various other fields.
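The weighted combination of per-attribute similarities described above can be sketched as follows (the attribute names, vectors, and weights here are illustrative, not the paper's actual data or weight conditions):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def combined_similarity(p1, p2, weights):
    """Weighted sum of per-attribute cosine similarities between two
    papers, where each paper is a dict of attribute -> feature vector
    (e.g. author co-citation counts, title term frequencies)."""
    return sum(w * cosine(p1[attr], p2[attr]) for attr, w in weights.items())

paper_a = {"author": np.array([1.0, 0.0, 2.0]), "title": np.array([1.0, 1.0, 0.0])}
paper_b = {"author": np.array([1.0, 0.0, 2.0]), "title": np.array([0.0, 1.0, 1.0])}

# Author-only weighting vs. an author+title mix (weights sum to 1)
author_only = combined_similarity(paper_a, paper_b, {"author": 1.0})
mixed = combined_similarity(paper_a, paper_b, {"author": 0.7, "title": 0.3})
assert abs(author_only - 1.0) < 1e-12
assert mixed < author_only
```

Varying the weight dictionary corresponds to the weight-condition selection experiments the abstract mentions.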

Comparison of Lambertian Model on Multi-Channel Algorithm for Estimating Land Surface Temperature Based on Remote Sensing Imagery

  • A Sediyo Adi Nugraha;Muhammad Kamal;Sigit Heru Murti;Wirastuti Widyatmanti
    • Korean Journal of Remote Sensing
    • /
    • v.40 no.4
    • /
    • pp.397-418
    • /
    • 2024
  • Land Surface Temperature (LST) is a crucial parameter in identifying drought, and it is essential to identify how LST accuracy can be improved, particularly in mountainous and hilly areas. Accuracy can be increased by applying topographic correction with the Lambertian model early in data processing. Empirical evidence has shown that this stage effectively enhances the identification of objects, especially in areas that lack direct illumination. This research therefore examines the application of the Lambertian model to LST estimation with the Multi-Channel Method (MCM) across various physiographic regions. The Lambertian model assumes Lambertian reflectance and adjusts the radiance values obtained from Sun-Canopy-Sensor (SCS) and Cosine Correction measurements. Applying topographic correction notably increases the dispersion of the resulting LST values. Physiography also matters: plains terrain tends to show extreme LST values of ≥ 350 K, while in mountainous and hilly terrain LST typically falls within 310-325 K. Omitting topographic correction results in LST differences of 22 K for plains, 12-21 K for hilly and mountainous terrain, and 7-9 K for mixed plains and mountainous terrain. Validation results indicate that the Lambertian model with the SCS and Cosine Correction methods yields better outcomes than processing without it, particularly in hilly and mountainous terrain, whereas in plains its application proves suboptimal. The relationship between physiography and Lambertian-derived LST shows a high average R2 value of 0.99. The lowest error and root mean square error values, approximately ±2 K and 0.54 respectively, were achieved with the Lambertian model and the SCS method. Based on these findings, the research concludes that the Lambertian model can increase LST values; the corrected values are often higher than those obtained without the Lambertian model.
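The two topographic corrections named in the abstract have standard textbook forms; a minimal sketch under the usual definitions (sz = solar zenith angle, i = local solar incidence angle on the slope), applied here to a single radiance value rather than a full image:

```python
import math

def cosine_correction(radiance, sun_zenith, incidence):
    """Classic cosine topographic correction: scale radiance by
    cos(sz) / cos(i)."""
    return radiance * math.cos(sun_zenith) / math.cos(incidence)

def scs_correction(radiance, sun_zenith, slope, incidence):
    """Sun-Canopy-Sensor (SCS) correction: like the cosine correction
    but preserves canopy geometry by including the slope term."""
    return radiance * (math.cos(slope) * math.cos(sun_zenith)) / math.cos(incidence)

sz, slope, i = math.radians(30.0), math.radians(20.0), math.radians(45.0)
flat = cosine_correction(100.0, sz, i)
scs = scs_correction(100.0, sz, slope, i)
# On a flat surface (slope = 0) the two corrections coincide
assert abs(scs_correction(100.0, sz, 0.0, i) - flat) < 1e-9
# A positive slope reduces the SCS factor relative to the cosine factor
assert scs < flat
```

In an LST workflow such as the one described, these factors would be applied to band radiances before the Multi-Channel Method retrieval, which is why the corrected LST values can differ from the uncorrected ones.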

Comparison of Classification Rate According to Parts Classification Method (부품 분류 방법에 따른 분류율 비교)

  • 이영길;안성규;정성환
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 1999.10b
    • /
    • pp.497-499
    • /
    • 1999
  • In this paper, to find a suitable classification method for a variety of parts, we experimented with two widely used approaches: a classification method based on neural networks and a classification method based on template matching. The methods were compared with respect to the part classification rate and the maximum number of parts usable for recognition. Experimental results show that the template matching method using the DCT (Discrete Cosine Transform) achieved the best classification rate in recognizing a variety of parts.


A Low-Delay MDCT/IMDCT

  • Lee, Sangkil;Lee, Insung
    • ETRI Journal
    • /
    • v.35 no.5
    • /
    • pp.935-938
    • /
    • 2013
  • This letter presents a low-delay algorithm for the modified discrete cosine transform (MDCT) and inverse MDCT (IMDCT). Conventional MDCT and IMDCT implementations require a 50% overlap-add (OLA) for perfect reconstruction, and the OLA process introduces an algorithmic delay of one frame length. A reduced-overlap window together with MDCT/IMDCT phase shifting is used to reduce this delay. The performance of the proposed algorithm is evaluated by applying the low-delay MDCT to the G.729.1 speech codec.
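The 50% OLA constraint discussed above can be demonstrated with a plain (non-low-delay) MDCT/IMDCT pair: with a sine window satisfying the Princen-Bradley condition, overlap-adding consecutive inverse transforms cancels the time-domain aliasing and reconstructs the interior of the signal exactly, at the cost of one frame of algorithmic delay. This is a sketch of the conventional baseline, not the letter's reduced-overlap scheme:

```python
import numpy as np

def mdct(frame):
    """MDCT of a 2N-sample windowed frame -> N coefficients."""
    n2 = frame.size
    n = n2 // 2
    ns, ks = np.arange(n2), np.arange(n)
    basis = np.cos(np.pi / n * (ns[None, :] + 0.5 + n / 2) * (ks[:, None] + 0.5))
    return basis @ frame

def imdct(coeffs):
    """Inverse MDCT: N coefficients -> 2N time samples (pre-windowing)."""
    n = coeffs.size
    ns, ks = np.arange(2 * n), np.arange(n)
    basis = np.cos(np.pi / n * (ns[:, None] + 0.5 + n / 2) * (ks[None, :] + 0.5))
    return (2.0 / n) * (basis @ coeffs)

N = 16
# Sine window: satisfies the Princen-Bradley condition w[n]^2 + w[n+N]^2 = 1
win = np.sin(np.pi / (2 * N) * (np.arange(2 * N) + 0.5))
rng = np.random.default_rng(0)
x = rng.standard_normal(6 * N)
x_pad = np.concatenate([np.zeros(N), x, np.zeros(N)])

# Analysis/synthesis with 50% overlap-add (hop size N)
out = np.zeros_like(x_pad)
for start in range(0, x_pad.size - 2 * N + 1, N):
    frame = x_pad[start:start + 2 * N]
    out[start:start + 2 * N] += win * imdct(mdct(win * frame))

# Interior samples are reconstructed exactly (perfect reconstruction)
assert np.allclose(out[N:-N], x, atol=1e-10)
```

The one-frame delay comes from the fact that a sample is not final until the next overlapping frame has been synthesized and added, which is precisely what the reduced-overlap window in the letter shortens.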

Macroblock-Level Deblocking Method to Improve Coding Efficiency for H.264/AVC

  • Le, Thanh Ha;Jung, Seung-Won;Park, Chun-Su;Ko, Sung-Jea
    • ETRI Journal
    • /
    • v.32 no.2
    • /
    • pp.336-338
    • /
    • 2010
  • A macroblock-level deblocking method is proposed for H.264/AVC, in which blocking artifacts are effectively eliminated in the discrete cosine transform domain at the macroblock encoding stage. Experimental results show that the proposed algorithm outperforms conventional H.264 in terms of coding efficiency, and the bitrate saving is up to 5.7% without reconstruction quality loss.

Transform Coding Based on Source Filter Model in the MDCT Domain

  • Sung, Jongmo;Ko, Yun-Ho
    • ETRI Journal
    • /
    • v.35 no.3
    • /
    • pp.542-545
    • /
    • 2013
  • State-of-the-art voice codecs extend the input bandwidth to enhance quality while maintaining interoperability with legacy codecs, and most of them employ a modified discrete cosine transform (MDCT) for coding the extended band. We propose a source filter model-based coding algorithm for MDCT spectral coefficients, apply it to the ITU-T G.711.1 super wideband (SWB) extension codec, and validate the model with a subjective test, which shows better quality than the standardized SWB codec.

A study on application of DCT algorithm with MVP(Multimedia Video Processor) (MVP(Multimedia Video Processor)를 이용한 DCT알고리즘 구현에 관한 연구)

  • 김상기;정진현
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 1997.10a
    • /
    • pp.1383-1386
    • /
    • 1997
  • The discrete cosine transform (DCT) is the most popular block transform for lossy coding, and it closely approximates the statistically optimal transform, the Karhunen-Loève transform. In this paper, a DCT encoder module is implemented with the TMS320C80 based on JPEG and MPEG, the international standards for image compression. The DCT encoder consists of three parts: a transformer, a vector quantizer, and an entropy encoder.
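The transformer and quantizer stages of such an encoder can be sketched as follows (scalar quantization stands in for the paper's vector quantizer, and the entropy encoder is omitted; the 8x8 block size follows JPEG convention):

```python
import numpy as np
from scipy.fft import dctn, idctn

def encode_block(block, q=16):
    """Transformer + quantizer stages of a DCT encoder: 2-D DCT of an
    8x8 block, then uniform scalar quantization (a simplification of
    the paper's vector quantizer)."""
    coeffs = dctn(block, norm='ortho')
    return np.round(coeffs / q).astype(int)

def decode_block(qcoeffs, q=16):
    """Dequantize and inverse-transform an encoded block."""
    return idctn(qcoeffs * float(q), norm='ortho')

# A smooth 8x8 block compresses into a handful of nonzero coefficients
x, y = np.meshgrid(np.arange(8), np.arange(8))
block = 100.0 + 4.0 * x + 2.0 * y
qc = encode_block(block)
rec = decode_block(qc)
assert np.count_nonzero(qc) < 16          # energy compacted into few coefficients
assert np.max(np.abs(rec - block)) < 8.0  # error stays within the quantizer step
```

The energy compaction shown here, most of the block's information landing in a few low-frequency coefficients, is what makes the DCT attractive for lossy coding and is the property the Karhunen-Loève comparison refers to.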
