• Title/Summary/Keyword: cluster method

Search Results: 2,498

The impact of leisure sports activities in older adults on wellness awareness, perceived freedom, and subjective well-being

  • Yanke Zhang;Sunmun Park
    • International Journal of Advanced Culture Technology / v.11 no.4 / pp.244-254 / 2023
  • The purpose of this study is to determine the relationship among leisure sports activities, wellness awareness, perceived freedom, and subjective well-being in older adults. The study population was defined as people aged 65 or older living in the Gwangju Metropolitan City area in 2022 who engaged in leisure sports activities. Samples were drawn by cluster random sampling; a total of 300 people were sampled, 150 male and 150 female. The survey instrument was adapted from questionnaires whose reliability and validity had been verified in previous studies, and all items used a 5-point scale. Data were analyzed with frequency analysis, exploratory factor analysis, reliability analysis, and multiple regression analysis in SPSS for Windows 21.0. First, wellness awareness in the elderly was found to partially affect perceived freedom. Second, wellness awareness was found to partially affect psychological happiness. Third, perceived freedom was found to affect subjective well-being. These results suggest that, to effectively improve the quality of life in old age, it is important to foster physical, mental, emotional, and social well-being through nature-friendly sports activities, thereby improving life motivation, satisfaction, and the sense of happiness.
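As a hedged illustration of the cluster random sampling used in the study above, a two-stage draw might look like the following; the venue names, cluster sizes, and counts are all invented, since the abstract does not describe the actual sampling frame.

```python
import random

# Hypothetical two-stage cluster random sample: "venues" stand in for
# the study's actual clusters; all sizes and counts are invented.
random.seed(42)
clusters = {f"venue_{i}": [f"venue_{i}_p{j}" for j in range(60)]
            for i in range(20)}
chosen = random.sample(sorted(clusters), k=10)        # stage 1: pick clusters
sample = [p for c in chosen
          for p in random.sample(clusters[c], k=30)]  # stage 2: pick people
```

Sampling whole clusters first, then members within them, is what distinguishes cluster sampling from a simple random draw over the population.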

Methods of Improving Operational Reliability of Oil Well Casing

  • Sergey A. Dolgikh;Irek I. Mukhamatdinov
    • Corrosion Science and Technology / v.23 no.1 / pp.1-10 / 2024
  • Oil well casing leaks are caused by contact of the casing's outer surface with formation electrolyte, usually associated with a high-salt aquifer or the absence of a cement ring behind the casing. The only way to reduce external casing corrosion is cathodic protection. Under cathodic polarization of the casing, the electron content and electron density in the crystal lattice increase, shifting the potential toward the cathodic region. At Tatneft enterprises, cathodic protection is applied in both cluster and individual schemes. The main criterion for cathodic protection is the magnitude of the protective current: for a casing, the protective current is considered sufficient if measurements with a two-contact probe show that the current directed to the casing has eliminated all anode sites. This work considers all methods for determining the required protective current and analyzes the methods used to determine the minimum protective current of the casing. The results show that measuring the potential drop along the casing is one of the most reliable methods for determining the protective current.
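In its simplest reading, the potential-drop measurement mentioned above reduces to Ohm's law over a casing segment of known longitudinal resistance; the figures below are invented for illustration and do not come from the paper.

```python
# I = dU / R: the axial current in a casing segment follows from the
# measured potential drop and the segment's longitudinal resistance.
# Both values here are made-up illustrative figures.
segment_resistance_ohm = 2.4e-4   # longitudinal resistance of the segment
measured_drop_v = 1.2e-3          # potential drop measured along it
current_a = measured_drop_v / segment_resistance_ohm
```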

Analysis on Types of Golf Tourism After COVID-19 by using Big Data

  • Hyun Seok Kim;Munyeong Yun;Gi-Hwan Ryu
    • International Journal of Advanced Culture Technology / v.12 no.1 / pp.270-275 / 2024
  • Introduction: This study uses big data to analyze the types of golf tourism, inbound and outbound, and to examine how the industry moved and what changes occurred during and after COVID-19. Method: Using Textom, a big data analysis tool, "golf tourism" and "COVID-19" were selected as keywords, and search frequency data from Naver and Daum were collected for one year, from 1 January 2023 to 31 December 2023, followed by data preprocessing. For the suitability of the study and more accurate data, entries unrelated to "golf tourism" were removed in a refining process, and similar keywords were merged into a single keyword for analysis. As a result of this refining, the top 36 keywords by relevance and search frequency were selected for the study. These 36 keywords were subjected to TF-IDF analysis, visualization analysis using the Ucinet6 and NetDraw programs, network analysis between keywords, and cluster analysis between keywords through CONCOR analysis. Results: The big data analysis found that the availability of overseas golf tourism affects inbound golf travel. "Golf", "Tourism", "Vietnam", and "Thailand" showed high frequencies, indicating that overseas golf tours are a re-emerging trend.
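The TF-IDF step used above can be sketched on a toy corpus; the three keyword "documents" below are invented stand-ins for the Textom-refined Naver/Daum data, which is not reproduced here.

```python
import math
from collections import Counter

# Minimal TF-IDF: term frequency within a document, weighted down by
# how many documents the term appears in. Toy corpus throughout.
docs = [
    ["golf", "tourism", "vietnam"],
    ["golf", "tourism", "thailand"],
    ["golf", "covid", "quarantine"],
]
n_docs = len(docs)
df = Counter(term for doc in docs for term in set(doc))  # document frequency

def tfidf(doc):
    tf = Counter(doc)
    return {t: (tf[t] / len(doc)) * math.log(n_docs / df[t]) for t in tf}

scores = tfidf(docs[0])
# "golf" appears in every document, so its IDF (hence TF-IDF) is zero,
# while the rarer "vietnam" outscores the more common "tourism".
```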

GAIN-QoS: A Novel QoS Prediction Model for Edge Computing

  • Jiwon Choi;Jaewook Lee;Duksan Ryu;Suntae Kim;Jongmoon Baik
    • Journal of Web Engineering / v.21 no.1 / pp.27-52 / 2021
  • With recent increases in the number of network-connected devices, the number of edge computing services that provide similar functions has increased, so it is important to recommend an optimal edge computing service based on quality of service (QoS). In the real world, however, QoS data suffer from a cold-start problem, highly sparse invocations, which makes it difficult to recommend a suitable service to the user. Previous studies applied deep learning to address this problem or used context information to extract deep features between users and services, but they did not consider the edge computing environment. Our goal is to predict QoS values in real edge computing environments with improved accuracy. To this end, we propose the GAIN-QoS technique. It clusters services based on their location information, calculates the distance between services and users in each cluster, and retains the QoS values of users within a certain distance. We then apply a Generative Adversarial Imputation Nets (GAIN) model and perform QoS prediction on the reconstructed user-service invocation matrix. When density is low, GAIN-QoS shows superior performance to other techniques, and the distance between service and user only slightly affects performance. Thus, compared with other methods, the proposed method can significantly improve the accuracy of QoS prediction for edge computing, which suffers from the cold-start problem.
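The distance-filtering step described above can be sketched as follows; all coordinates, QoS values, and the radius are invented, and the GAIN imputation model itself is not reproduced, so this is only the neighbor-selection idea.

```python
import math

# Keep QoS observations only from users within a radius of a service's
# location; farther users are dropped before imputation. Toy data.
def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

service_loc = (0.0, 0.0)
user_obs = {                    # user -> (location, observed response time)
    "u1": ((0.5, 0.5), 120.0),
    "u2": ((5.0, 5.0), 480.0),  # too far away: excluded
    "u3": ((1.0, 0.0), 140.0),
}
radius = 2.0
nearby = {u: qos for u, (loc, qos) in user_obs.items()
          if dist(loc, service_loc) <= radius}
```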

Whisper-Tiny Model with Federated Fine Tuning for Keyword Recognition System (키워드 인식 시스템을 위한 연합 미세 조정 활용 위스퍼-타이니 모델)

  • Shivani Sanjay Kolekar;Kyungbaek Kim
    • Annual Conference of KIPS / 2024.10a / pp.678-681 / 2024
  • Fine-tuning is critical for enabling a model to operate effectively in resource-constrained environments by incorporating domain-specific data, improving reliability, fairness, and accuracy. Large language models (LLMs) are traditionally trained centrally, because vast computational resources and direct access to large, aggregated datasets simplify the optimization process. However, centralized training has significant drawbacks, including long delays, substantial communication costs, and slow convergence, particularly when deploying models to devices with limited resources. Our proposed framework addresses these challenges by applying a federated fine-tuning strategy to the Whisper-tiny model for a keyword recognition system (KWR). Federated learning allows edge devices to perform local updates without constant data transmission to a central server. By selecting a cluster of clients each round and aggregating their updates with federated averaging, this strategy accelerates convergence, reduces communication overhead, and achieves higher accuracy in comparatively less time, making it more suitable than a centralized approach. By the tenth round of federated updates, the fine-tuned model demonstrates notable improvements, achieving over 95.48% test accuracy. We compare the federated fine-tuning method with the centralized strategy; our framework shows a significant improvement in accuracy in fewer training rounds.
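The federated averaging step described above can be sketched in a few lines; the flattened weight vectors and client dataset sizes are toy numbers, and the local Whisper-tiny fine-tuning itself is not reproduced.

```python
# Server-side federated averaging: each selected client returns locally
# fine-tuned weights, and the server averages them weighted by local
# dataset size. Weights and sizes are invented toy values.
def fed_avg(client_weights, client_sizes):
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
            for i in range(n_params)]

updates = [[1.0, 2.0], [3.0, 4.0]]   # flattened weights from 2 clients
sizes = [100, 300]                   # local dataset sizes
avg = fed_avg(updates, sizes)        # weighted mean of each parameter
```

In each round the averaged weights are broadcast back to the next cluster of clients, so no raw audio ever leaves the devices.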


Implementation of the ZigBee-based Homenetwork security system using neighbor detection and ACL (이웃탐지와 ACL을 이용한 ZigBee 기반의 홈네트워크 보안 시스템 구현)

  • Park, Hyun-Moon;Park, Soo-Hyun;Seo, Hae-Moon
    • Journal of the Institute of Electronics Engineers of Korea CI / v.46 no.1 / pp.35-45 / 2009
  • In an open environment such as a home network, a ZigBee cluster comprising a plurality of Ato-cells is required to provide strong security for the movement of collected, measured data. Against this backdrop, various security issues are under discussion concerning master key control policies, Access Control Lists (ACL), and device sources, all of which involve authentication between ZigBee devices. A variety of authentication methods, including hash chains, token keys, and public key infrastructure, have been studied, and some have been reflected in standards. In this context, this paper explores whether a new method for searching for neighboring devices, in order to detect device replication and Sybil attacks, can be applied and extended to the security domain. The neighbor-detection method authenticates a new device by including and comparing its ACL information with that of neighboring devices, using information on peripheral devices. Accordingly, this new method is designed to detect malicious attacks such as Sybil attacks and device replication, as well as to prevent hacking. In addition, with reference to ITU-T SG17 and ZigBee Pro, home network equipment is implemented that classifies labels and rules into four categories: the user's access rights, time, date, and day. In closing, the results demonstrate that the proposed method performs significantly better than existing methods in detecting malicious devices, in terms of both success rate and time taken.
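A loose sketch of the neighbor-comparison idea follows; the router and device names are invented and the paper's actual ACL format is not reproduced. The premise is simply that a device ID claimed in the ACL tables of more neighbors than a legitimate device could plausibly reach is flagged as a possible replica or Sybil identity.

```python
# Each neighbor reports the device IDs in its ACL; an ID claimed in
# more tables than expected is flagged for inspection. Toy data.
neighbor_acls = {
    "router_a": {"dev_01", "dev_02"},
    "router_b": {"dev_03"},
    "router_c": {"dev_01"},  # dev_01 claimed again at a distant neighbor
}

def suspected_replicas(acls, max_claims=1):
    counts = {}
    for table in acls.values():
        for dev in table:
            counts[dev] = counts.get(dev, 0) + 1
    return {d for d, c in counts.items() if c > max_claims}
```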

Combined Image Retrieval System using Clustering and Condensation Method (클러스터링과 차원축약 기법을 통합한 영상 검색 시스템)

  • Lee Se-Han;Cho Jungwon;Choi Byung-Uk
    • Journal of the Institute of Electronics Engineers of Korea CI / v.43 no.1 s.307 / pp.53-66 / 2006
  • This paper proposes the combined image retrieval system that gives the same relevance as exhaustive search method while its performance can be considerably improved. This system is combined with two different retrieval methods and each gives the same results that full exhaustive search method does. Both of them are two-stage method. One uses condensation of feature vectors, and the other uses binary-tree clustering. These two methods extract the candidate images that always include correct answers at the first stage, and then filter out the incorrect images at the second stage. Inasmuch as these methods use equal algorithm, they can get the same result as full exhaustive search. The first method condenses the dimension of feature vectors, and it uses these condensed feature vectors to compute similarity of query and images in database. It can be found that there is an optimal condensation ratio which minimizes the overall retrieval time. The optimal ratio is applied to first stage of this method. Binary-tree clustering method, searching with recursive 2-means clustering, classifies each cluster dynamically with the same radius. For preserving relevance, its range of query has to be compensated at first stage. After candidate clusters were selected, final results are retrieved by computing similarities again at second stage. The proposed method is combined with above two methods. Because they are not dependent on each other, combined retrieval system can make a remarkable progress in performance.
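The recursive 2-means splitting behind the binary-tree clustering can be sketched on toy one-dimensional data; real image feature vectors are high-dimensional, and the values and leaf size below are invented.

```python
# Toy recursive 2-means ("binary-tree") clustering on 1-D values.
def two_means(points):
    c1, c2 = min(points), max(points)   # extreme points as initial centers
    for _ in range(10):
        a = [p for p in points if abs(p - c1) <= abs(p - c2)]
        b = [p for p in points if abs(p - c1) > abs(p - c2)]
        if not a or not b:
            return points, []
        c1, c2 = sum(a) / len(a), sum(b) / len(b)
    return a, b

def build_tree(points, leaf_size=2):
    if len(points) <= leaf_size:
        return points                   # leaf cluster
    a, b = two_means(points)
    if not b:
        return points
    return [build_tree(a, leaf_size), build_tree(b, leaf_size)]

tree = build_tree([1.0, 1.1, 5.0, 5.2, 9.0, 9.3])

# collect leaf values to check nothing was lost in the split
leaves, stack = [], [tree]
while stack:
    node = stack.pop()
    if node and isinstance(node[0], list):
        stack.extend(node)
    else:
        leaves.extend(node)
```

At query time, only the clusters whose (compensated) radius intersects the query need their members re-scored, which is where the speedup over exhaustive search comes from.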

A Study on Characteristic Design Hourly Factor by Road Type for National Highways (일반국도 도로유형별 설계시간계수 특성에 관한 연구)

  • Ha, Jung-Ah
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.12 no.2 / pp.52-62 / 2013
  • The design hourly factor (DHF) is defined as the ratio of the design hourly volume (DHV) to the annual average daily traffic (AADT). Generally, the 30th-highest hourly volume is used as the DHV, but in that case the DHV is affected by holiday volumes, so the road risks being overdesigned. Computing the K factor requires counting all 8,760 hourly traffic volumes, which is impossible except at permanent traffic counts. This study applied three methods of deriving the DHF: using the 30th-highest hourly volume (method 1); using the peak hour volume (method 2); and ordering the hourly volumes by descending rank, fitting a smooth curve, and finding the point where the curve changes drastically (method 3). That point is taken as the design hour, from which the design hourly factor can be computed. In addition, national highways were classified into three road types using factor analysis and cluster analysis, so the characteristics of the DHF could be analyzed by road type. The DHF from method 1 was the largest of the three methods. Method 2 showed no difference in DHF by road type, because the peak hour cannot capture the characteristics of hourly volume change. The DHF from method 3 was similar to the HCM value except for recreational roads, and the 118th-highest hourly volume was found to be appropriate.
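Method 1 above can be sketched directly from its definition; since no real count data appear in the abstract, a full year of hourly volumes is synthesized here purely for illustration.

```python
import random

# K (the design hourly factor) as the 30th-highest hourly volume
# divided by AADT, over one year (8,760 hours) of synthetic counts.
random.seed(0)
hourly = [random.randint(100, 2000) for _ in range(8760)]  # one year
aadt = sum(hourly) / 365                       # annual average daily traffic
k30 = sorted(hourly, reverse=True)[29] / aadt  # 30th-rank volume / AADT
```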

Deep Learning Model Validation Method Based on Image Data Feature Coverage (영상 데이터 특징 커버리지 기반 딥러닝 모델 검증 기법)

  • Lim, Chang-Nam;Park, Ye-Seul;Lee, Jung-Won
    • KIPS Transactions on Software and Data Engineering / v.10 no.9 / pp.375-384 / 2021
  • Deep learning techniques have proven to perform well in image processing and are applied in various fields. The most widely used methods for validating a deep learning model are the holdout, k-fold cross-validation, and bootstrap methods. These legacy methods consider the balance of the ratio between classes when dividing the data set, but not the ratio of the various features that exist within the same class; if these features are not considered, validation results may be biased toward some of them. We therefore propose a deep learning model validation method based on data feature coverage for image classification that improves on the legacy methods. The proposed technique introduces a data feature coverage metric that numerically measures how well the training and evaluation data sets reflect the features of the entire data set. With this method, the data set can be divided while ensuring coverage of all features of the entire data set, and the model's evaluation results can be analyzed per feature cluster. As a result, by providing feature cluster information alongside the trained model's evaluation results, the method reveals which data features affect the trained model.
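A coverage-aware split in the spirit of the method above can be sketched as follows; the sample IDs and cluster labels are invented, and the cluster labels are assumed given in advance (e.g. from clustering extracted image features), which the paper's actual metric may compute differently.

```python
import random
from collections import defaultdict

# Group samples by feature cluster and draw from every cluster, so both
# the training and evaluation splits cover all features. Toy data.
random.seed(1)
samples = [(f"img_{i:02d}", i % 3) for i in range(30)]  # (id, feature cluster)
by_cluster = defaultdict(list)
for sid, cluster in samples:
    by_cluster[cluster].append(sid)

train, test = [], []
for cluster, ids in sorted(by_cluster.items()):
    random.shuffle(ids)
    cut = int(len(ids) * 0.8)
    train += ids[:cut]       # every cluster contributes to training
    test += ids[cut:]        # ...and to evaluation
```

Evaluation accuracy can then be reported per cluster, exposing feature groups where the model underperforms even when the overall score looks fine.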

A Study on Optimum Coding Method for Correlation Processing of Radio Astronomy (전파천문 상관처리를 위한 최적 코딩 방법에 관한 연구)

  • Shin, Jae-Sik;Oh, Se-Jin;Yeom, Jae-Hwan;Roh, Duk-Gyoo;Chung, Dong-Kyu;Oh, Chung-Sik;Hwang, Ju-Yeon;So, Yo-Hwan
    • Journal of the Institute of Convergence Signal Processing / v.16 no.4 / pp.139-148 / 2015
  • In this paper, an optimum coding method using open libraries is proposed to improve the performance of a software correlator developed for the Korea-Japan Joint VLBI Correlator (KJJVC). Correlation systems for VLBI observation are generally implemented in hardware using ASICs or FPGAs, because the computational load grows geometrically with the number of participating observatories. Recently, however, software correlation systems have been built on large servers such as clusters, thanks to the growth of computing power. Since a hardware VLBI correlator can process data in real time or quasi-real time relative to the observation time, a software correlator must be optimally coded to achieve the same performance as the hardware. Therefore, this paper experimentally compares coding approaches for the FFT processing stage, the most important part of the correlator system, based on the open-source FFTW library: the general method using FFTW alone; methods using SSE (Streaming SIMD Extensions), shared memory, or OpenMP; and a method merging the techniques listed above. The experimental results confirm that the proposed optimum coding method, which combines the FFTW library, shared memory, and OpenMP, effectively improves the performance of the developed software correlator by reducing correlation time compared with the conventional method.
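The FFT-based (FX-style) correlation at the heart of such a correlator can be sketched on toy signals; a naive O(n²) DFT stands in for FFTW here, and the impulse signals are invented, so this shows only the transform-multiply-inverse structure, not the KJJVC pipeline.

```python
import cmath

# FX correlation: transform each station's signal, multiply one
# spectrum by the conjugate of the other, inverse-transform to get the
# circular cross-correlation. Naive DFT in place of FFTW; toy signals.
def dft(x, inverse=False):
    n = len(x)
    s = 1 if inverse else -1
    out = [sum(x[t] * cmath.exp(s * 2j * cmath.pi * k * t / n)
               for t in range(n)) for k in range(n)]
    return [v / n for v in out] if inverse else out

sig = [0.0, 1.0, 0.0, 0.0]
lagged = [0.0, 0.0, 1.0, 0.0]   # same impulse, delayed by one sample
X, Y = dft(sig), dft(lagged)
cross = dft([y * x.conjugate() for x, y in zip(X, Y)], inverse=True)
lag = max(range(len(cross)), key=lambda i: cross[i].real)  # recovers delay 1
```

The correlation peak at lag 1 recovers the one-sample delay between the two streams, which is the geometric-delay information a VLBI correlator extracts per baseline.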