• Title/Summary/Keyword: Python Library

Search Result 55

Text Document Classification Scheme using TF-IDF and Naïve Bayes Classifier (TF-IDF와 Naïve Bayes 분류기를 활용한 문서 분류 기법)

  • Yoo, Jong-Yeol;Hyun, Sang-Hyun;Yang, Dong-Min
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2015.10a
    • /
    • pp.242-245
    • /
    • 2015
  • Recently, with the spread of large-scale data in the digital economy, the era of big data has arrived. Within big data, unstructured text data such as technical documents, confidential documents, and false-information documents pose serious leakage problems. To prevent this, the need for techniques that sort and process documents consisting of unstructured text data has increased. In this paper, we propose a novel text classification scheme that learns from training data sets and correctly classifies unstructured text data into two categories, True and False. For the performance evaluation, we implement our proposed scheme using the Naïve Bayes document classifier and TF-IDF modules from the Python library, and compare it with an existing document classifier.

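The two building blocks named in the abstract, TF-IDF weighting and a Naïve Bayes classifier, can be sketched with the standard library alone. This is a minimal illustration, not the paper's implementation; the tiny corpus and the True/False labels below are invented:

```python
import math
from collections import Counter

def tf_idf(docs):
    """TF-IDF vectors for a list of tokenized documents (smoothed IDF)."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}
    return [{t: (tf[t] / len(doc)) * idf[t] for t in tf}
            for doc, tf in ((d, Counter(d)) for d in docs)], idf

def train_nb(docs, labels, alpha=1.0):
    """Multinomial Naive Bayes with Laplace smoothing on word counts."""
    vocab = {t for d in docs for t in d}
    counts = {c: Counter() for c in set(labels)}
    prior = Counter(labels)
    for doc, c in zip(docs, labels):
        counts[c].update(doc)
    model = {}
    for c, cnt in counts.items():
        total = sum(cnt.values()) + alpha * len(vocab)
        model[c] = (math.log(prior[c] / len(docs)),                    # class log-prior
                    {t: math.log((cnt[t] + alpha) / total) for t in vocab},
                    math.log(alpha / total))                           # unseen-word logprob
    return model

def classify(model, doc):
    """Return the class with the highest posterior log-probability."""
    return max(model,
               key=lambda c: model[c][0]
               + sum(model[c][1].get(t, model[c][2]) for t in doc))
```

Note that this sketch trains Naïve Bayes on raw counts; feeding it the TF-IDF weights instead is a common variant.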
Analysis of Behavior Patterns from Human and Web Crawler Events Log on ScienceON (ScienceON 웹 로그에 대한 인간 및 웹 크롤러 행위 패턴 분석)

  • Poositaporn, Athiruj;Jung, Hanmin;Park, Jung Hoon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.05a
    • /
    • pp.6-8
    • /
    • 2022
  • Web log analysis is one of the essential procedures for service improvement. ScienceON is a representative information service that provides various S&T literature and information, and we analyze its logs for continuous improvement. This study analyzes ScienceON web logs recorded in May 2020 and May 2021, dividing them into human and web crawler traffic and performing an in-depth analysis. First, only web logs of the S (search), V (detail view), and D (download) types are extracted and normalized, yielding 658,407 and 8,727,042 records for the two periods. Second, using the Python 'user_agents' library, the logs are classified into humans and web crawlers. Third, with the session size set to 60 seconds, each session is analyzed. We found that web crawlers, unlike humans, show relatively long average behavior patterns per session, consisting mainly of V events. In future work, the service will be improved to quickly detect and respond to web crawlers and to the behavioral patterns of human users.

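The 60-second sessionization step the abstract describes can be sketched with the standard library. This is one plausible reading of the procedure, not the authors' code; the crawler/human split itself would use the `user_agents` library's `parse(...).is_bot` flag, which is omitted here:

```python
from datetime import datetime, timedelta

SESSION_GAP = timedelta(seconds=60)  # session size used in the study

def sessionize(events):
    """Split chronologically sorted (timestamp, type) events into sessions:
    a gap longer than SESSION_GAP starts a new session. Event types follow
    the paper: S (search), V (detail view), D (download)."""
    sessions, current, last = [], [], None
    for ts, etype in events:
        if last is not None and ts - last > SESSION_GAP:
            sessions.append(current)
            current = []
        current.append(etype)
        last = ts
    if current:
        sessions.append(current)
    return sessions
```

Per-session behavior patterns (e.g. average session length, share of V events) can then be computed separately for the human and crawler partitions.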
Deep learning for the classification of cervical maturation degree and pubertal growth spurts: A pilot study

  • Mohammad-Rahimi, Hossein;Motamadian, Saeed Reza;Nadimi, Mohadeseh;Hassanzadeh-Samani, Sahel;Minabi, Mohammad A. S.;Mahmoudinia, Erfan;Lee, Victor Y.;Rohban, Mohammad Hossein
    • The korean journal of orthodontics
    • /
    • v.52 no.2
    • /
    • pp.112-122
    • /
    • 2022
  • Objective: This study aimed to present and evaluate a new deep learning model for determining cervical vertebral maturation (CVM) degree and growth spurts by analyzing lateral cephalometric radiographs. Methods: The study sample included 890 cephalograms. The images were classified into six cervical stages independently by two orthodontists. The images were also categorized into three degrees on the basis of the growth spurt: pre-pubertal, growth spurt, and post-pubertal. Subsequently, the samples were fed to a transfer learning model implemented using the Python programming language and PyTorch library. In the last step, the test set of cephalograms was randomly coded and provided to two new orthodontists in order to compare their diagnosis to the artificial intelligence (AI) model's performance using weighted kappa and Cohen's kappa statistical analyses. Results: The model's validation and test accuracy for the six-class CVM diagnosis were 62.63% and 61.62%, respectively. Moreover, the model's validation and test accuracy for the three-class classification were 75.76% and 82.83%, respectively. Furthermore, substantial agreements were observed between the two orthodontists as well as one of them and the AI model. Conclusions: The newly developed AI model had reasonable accuracy in detecting the CVM stage and high reliability in detecting the pubertal stage. However, its accuracy was still less than that of human observers. With further improvements in data quality, this model should be able to provide practical assistance to practicing dentists in the future.
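The agreement statistics mentioned in the abstract, Cohen's kappa and weighted kappa, reduce to one formula over a disagreement-weight function. A standard-library sketch of the statistic (not the authors' analysis code; the rating lists in the test are invented):

```python
from collections import Counter

def cohens_kappa(r1, r2, weights=None):
    """Cohen's kappa between two raters' label lists.
    Pass weights=lambda i, j: (i - j) ** 2 for quadratic weighted kappa.
    Computed as 1 - (observed weighted disagreement / expected)."""
    assert len(r1) == len(r2)
    cats = sorted(set(r1) | set(r2))
    n = len(r1)
    if weights is None:
        weights = lambda i, j: 0.0 if i == j else 1.0  # unweighted kappa
    obs = Counter(zip(r1, r2))
    p1, p2 = Counter(r1), Counter(r2)
    d_obs = sum(weights(i, j) * obs[(i, j)] / n for i in cats for j in cats)
    d_exp = sum(weights(i, j) * p1[i] * p2[j] / n ** 2 for i in cats for j in cats)
    return 1.0 - d_obs / d_exp
```

For the unweighted case this is algebraically identical to the familiar (p_o - p_e) / (1 - p_e) form.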

Spatio-temporal potential future drought prediction using machine learning for time series data forecast in Abomey-calavi (South of Benin)

  • Agossou, Amos;Kim, Do Yeon;Yang, Jeong-Seok
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2021.06a
    • /
    • pp.268-268
    • /
    • 2021
  • Groundwater is the main source of water for domestic, industrial, and agricultural activities in Abomey-calavi (southern Benin). Groundwater intake across the region is not fully monitored by any network because of the many private boreholes and traditional wells used by the population. After some decades, this important resource is becoming more and more vulnerable and needs more attention. For better groundwater management in the region of Abomey-calavi, the present study attempts to predict probable future groundwater drought using a Recurrent Neural Network (RNN) for groundwater-level prediction. The RNN model was created in Python in a Jupyter environment. Six years of monthly groundwater-level data were used for model calibration, two years of data for model testing, and the model was finally used to predict two years of future groundwater levels (2020 and 2021). The GRI was calculated for 9 wells across the area from 2012 to 2021. The GRI values in the dry season (end of March) showed groundwater drought for the first time during the study period in 2014, as severe and moderate; from 2015 to 2021 they show only moderate drought. The rainy seasons of 2020 and 2021 were relatively wet and near normal. The GRI showed no drought in the rainy season during the study period, but an important decline in groundwater level between 2012 and 2021. Pearson's correlation coefficient, calculated between the GRI and rainfall from 2005 to 2020 (using only the three wells with long-period time-series data), indicates that the groundwater drought mostly observed in the dry season is not mainly caused by rainfall scarcity (correlation values between -0.113 and -0.083); rather, it could be the consequence of overexploitation of the resource, which caused the important spatial and temporal decline observed from 2012 to 2021.

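The drought-versus-rainfall argument in the abstract rests on Pearson's correlation coefficient, which is simple enough to sketch with the standard library (the series in the test are invented; this is not the study's data):

```python
import math

def pearson_r(x, y):
    """Pearson's correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Values near zero, like the -0.113 to -0.083 reported, indicate essentially no linear relationship, which is what supports the overexploitation interpretation.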
An AutoML-driven Antenna Performance Prediction Model in the Autonomous Driving Radar Manufacturing Process

  • So-Hyang Bak;Kwanghoon Pio Kim
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.12
    • /
    • pp.3330-3344
    • /
    • 2023
  • This paper proposes an antenna performance prediction model for the autonomous driving radar manufacturing process. Our work is based upon a challenge dataset, the Driving Radar Manufacturing Process Dataset, and a typical AutoML machine learning workflow engine, the PyCaret open-source Python library. The dataset contains a total of 70 data items, of which 54 are used as input features and 16 as output features, so the task is framed as a multi-output regression problem. During the data analysis and preprocessing phase, we identified several input features with similar correlations and removed some of them, as they could cause a multicollinearity problem that affects overall model performance. In the training phase, we train a regression model for each output feature using the AutoML approach. Next, we selected the top 5 models showing the highest performance in the AutoML result reports and applied an ensemble method to further improve their performance. In the experimental evaluation of the regression prediction model, we used two metrics, MAE and RMSE, obtaining 0.6928 and 1.2065, respectively. Additionally, we carried out a series of experiments comparing the proposed model's performance with that of other existing models. In conclusion, the proposed approach enhances accuracy for safer autonomous vehicles, reduces manufacturing costs through the AutoML-PyCaret ensemble model, and prevents the production of faulty radar systems, conserving resources. Ultimately, the proposed model holds significant promise not only for antenna performance prediction but also for improving manufacturing quality and advancing radar systems in autonomous vehicles.
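The two evaluation metrics the paper reports, MAE (0.6928) and RMSE (1.2065), have standard definitions that can be written in a few lines (a generic sketch, not tied to the paper's data or to PyCaret's internals):

```python
import math

def mae(y_true, y_pred):
    """Mean absolute error: average magnitude of the residuals."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root mean squared error: penalizes large residuals more than MAE."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))
```

RMSE is always at least as large as MAE on the same residuals, consistent with the paper's 1.2065 versus 0.6928.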

Development of Deep Learning AI Model and RGB Imagery Analysis Using Pre-sieved Soil (입경 분류된 토양의 RGB 영상 분석 및 딥러닝 기법을 활용한 AI 모델 개발)

  • Kim, Dongseok;Song, Jisu;Jeong, Eunji;Hwang, Hyunjung;Park, Jaesung
    • Journal of The Korean Society of Agricultural Engineers
    • /
    • v.66 no.4
    • /
    • pp.27-39
    • /
    • 2024
  • Soil texture is determined by the proportions of sand, silt, and clay within the soil, which influence characteristics such as porosity, water retention capacity, electrical conductivity (EC), and pH. Traditional classification of soil texture requires significant sample preparation including oven drying to remove organic matter and moisture, a process that is both time-consuming and costly. This study aims to explore an alternative method by developing an AI model capable of predicting soil texture from images of pre-sorted soil samples using computer vision and deep learning technologies. Soil samples collected from agricultural fields were pre-processed using sieve analysis and the images of each sample were acquired in a controlled studio environment using a smartphone camera. Color distribution ratios based on RGB values of the images were analyzed using the OpenCV library in Python. A convolutional neural network (CNN) model, built on PyTorch, was enhanced using Digital Image Processing (DIP) techniques and then trained across nine distinct conditions to evaluate its robustness and accuracy. The model has achieved an accuracy of over 80% in classifying the images of pre-sorted soil samples, as validated by the components of the confusion matrix and measurements of the F1 score, demonstrating its potential to replace traditional experimental methods for soil texture classification. By utilizing an easily accessible tool, significant time and cost savings can be expected compared to traditional methods.
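The color-distribution feature the study extracts with OpenCV, per-channel shares of RGB intensity, can be illustrated without any imaging dependency. A dependency-free sketch over plain pixel tuples (the pixels in the test are invented, not soil data):

```python
def rgb_ratios(pixels):
    """Given pixels as (R, G, B) tuples, return each channel's share of
    the total intensity: the kind of color-distribution ratio the study
    derives from soil images before training the CNN."""
    totals = [0, 0, 0]
    for r, g, b in pixels:
        totals[0] += r
        totals[1] += g
        totals[2] += b
    s = sum(totals)
    return tuple(t / s for t in totals)
```

In the actual pipeline these ratios would be computed from images loaded with OpenCV (`cv2.imread` yields BGR order, so the channels must be reordered first).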

Genetic diversity and phylogenetic relationship of Angus herds in Hungary and analyses of their production traits

  • Judit Marton;Ferenc Szabo;Attila Zsolnai;Istvan Anton
    • Animal Bioscience
    • /
    • v.37 no.2
    • /
    • pp.184-192
    • /
    • 2024
  • Objective: This study aims to investigate the genetic structure and characteristics of the Angus cattle population in Hungary. The survey was performed with the assistance of the Hungarian Hereford, Angus, Galloway Association (HHAGA). Methods: Genetic parameters of 1,369 animals from 16 Angus herds were analyzed using the genotyping results of 12 microsatellite markers with the aid of PowerMarker, Genalex, GDA-NT2021, and STRUCTURE software. Genotyping of DNA was performed using an automated genetic analyzer. Based on pairwise identity by state values of animals, the Python networkx 2.3 library was used for network analysis of the breed and to identify the central animals. Results: The observed numbers of alleles on the 12 loci under investigation ranged from 11 to 18. The average effective number of alleles was 3.201. The overall expected heterozygosity was 0.659 and the observed heterozygosity was 0.710. Four groups were detected among the 16 Angus herds. The breeders' information validated the grouping results and facilitated the comparison of birth weight, age at first calving, number of calves born and productive lifespan data between the four groups, revealing significant differences. We identified the central animals/herd of the Angus population in Hungary. The match of our group descriptions with the phenotypic data provided by the breeders further underscores the value of cooperation between breeders and researchers. Conclusion: The observation that significant differences in the measured traits occurred among the identified groups paves the way to further enhancement of breeding efficiency. Our findings have the potential to aid the development of new breeding strategies and help breeders keep the Angus populations in Hungary under genetic supervision. 
Based on our results, the efficient use of upcoming genomic selection can, in some cases, significantly improve birth weight, age at first calving, number of calves born, and the productive lifespan of animals.
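The diversity statistics reported above (average effective number of alleles 3.201, expected heterozygosity 0.659) follow from standard formulas over allele frequencies at a locus: He = 1 - Σp² and Ne = 1/Σp². A small sketch of those formulas (the allele counts in the test are invented):

```python
def locus_stats(allele_counts):
    """Expected heterozygosity He = 1 - sum(p_i^2) and effective number
    of alleles Ne = 1 / sum(p_i^2) from allele counts at one locus."""
    total = sum(allele_counts)
    sum_p2 = sum((c / total) ** 2 for c in allele_counts)
    return 1.0 - sum_p2, 1.0 / sum_p2
```

Averaging these per-locus values over the 12 microsatellite markers gives population-level figures like those the study reports.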

Patent Technology Trends of Oral Health: Application of Text Mining

  • Hee-Kyeong Bak;Yong-Hwan Kim;Han-Na Kim
    • Journal of dental hygiene science
    • /
    • v.24 no.1
    • /
    • pp.9-21
    • /
    • 2024
  • Background: The purpose of this study was to utilize text network analysis and topic modeling to identify interconnected relationships among keywords present in patent information related to oral health, and subsequently extract latent topics and visualize them. By examining key keywords and specific subjects, this study sought to comprehend the technological trends in oral health-related innovations. Furthermore, it aims to serve as foundational material, suggesting directions for technological advancement in dentistry and dental hygiene. Methods: The data utilized in this study consisted of information registered over a 20-year period until July 31st, 2023, obtained from the patent information retrieval service, KIPRIS. A total of 6,865 patent titles related to keywords, such as "dentistry," "teeth," and "oral health," were collected through the searches. The research tools included a custom-designed program coded specifically for the research objectives based on Python 3.10. This program was used for keyword frequency analysis, semantic network analysis, and implementation of Latent Dirichlet Allocation for topic modeling. Results: Upon analyzing the centrality of connections among the top 50 frequently occurring words, "method," "tooth," and "manufacturing" displayed the highest centrality, while "active ingredient" had the lowest. Regarding topic modeling outcomes, the "implant" topic constituted the largest share at 22.0%, while topics concerning "devices and materials for oral health" and "toothbrushes and oral care" exhibited the lowest proportions at 5.5% each. Conclusion: Technologies concerning methods and implants are continually being researched in patents related to oral health, while there is comparatively less technological development in devices and materials for oral health. This study is expected to be a valuable resource for uncovering potential themes from a large volume of patent titles and suggesting research directions.
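The abstract's keyword frequency analysis and connection (degree) centrality can be sketched over tokenized patent titles with the standard library. This is a simplified illustration of the network idea, not the study's custom program, and the example titles are invented:

```python
from collections import Counter
from itertools import combinations

def keyword_stats(titles):
    """Word frequencies plus degree centrality on the co-occurrence
    network, where two words are linked if they share a title."""
    freq = Counter(w for t in titles for w in t)
    neighbors = {w: set() for w in freq}
    for t in titles:
        for a, b in combinations(set(t), 2):
            neighbors[a].add(b)
            neighbors[b].add(a)
    n = len(freq)
    centrality = {w: len(nb) / (n - 1) for w, nb in neighbors.items()}
    return freq, centrality
```

Topic extraction itself would use a Latent Dirichlet Allocation implementation (e.g. the one in gensim or scikit-learn) on the same tokenized titles.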

Remote Sensing based Algae Monitoring in Dams using High-resolution Satellite Image and Machine Learning (고해상도 위성영상과 머신러닝을 활용한 녹조 모니터링 기법 연구)

  • Jung, Jiyoung;Jang, Hyeon June;Kim, Sung Hoon;Choi, Young Don;Yi, Hye-Suk;Choi, Sunghwa
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2022.05a
    • /
    • pp.42-42
    • /
    • 2022
  • To date, algal bloom monitoring in watersheds has relied heavily on point-based monitoring via on-site water sampling, which makes it difficult to efficiently monitor and respond to blooms that spread widely across a water body depending on climate, flow velocity, and water temperature. In addition, because of limited observational data, previous studies have mostly mapped remote sensing data to derived indices closely related to algal blooms, such as NDVI, FGAI, and SEI, rather than to field-measured data. This study aims to improve the accuracy and efficiency of algal bloom monitoring. We first secured more than 7,000 algal observations using measurement instruments, and then applied various machine learning techniques to map high-resolution satellite data from the same period onto the in-situ measurements, examining their effectiveness. The study area is the Yeongju Dam basin, located on the upper Naeseongcheon stream of the Nakdong River. In the data collection stage, 7,291 algal measurements were taken over four campaigns between February and September 2020 for areal in-situ observation, and 13 spectral bands were extracted from Sentinel-2 data of the same time and location (Bands 1-12, with Band 8 counted twice as 8 and 8A). Next, an algae_monitoring Python library was built to apply the machine learning analysis. The developed library consists of five stages, so that it can be applied not only to the Yeongju Dam but to other basins as well: 1) data preparation, splitting training and test sets; 2) model application, with a choice of Random Forest, Gradient Boosting Regression, or XGBoost; 3) performance testing of the applied models (R2, MSE, MAE, RMSE, NSE, KGE, etc.); 4) visualization of model results; and 5) application of the selected model to convert satellite data into algal concentration values. In this study, applying Random Forest to 14 inputs (the 12 Sentinel-2 bands plus two meteorological variables, air temperature and cloud fraction) showed the best overall fit, achieving an NSE (Nash-Sutcliffe Efficiency) of 0.96 on the test set (0.99 on the training set). This confirms that training on wide-area satellite data together with sufficiently large in-situ measurement sets can dramatically improve the efficiency of algal monitoring analysis.

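The headline metric of that study, the Nash-Sutcliffe Efficiency (NSE = 0.96 on the test set), has a compact definition worth stating explicitly (a generic sketch; the series in the test are invented):

```python
def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 - SSE / variance of the observations
    around their mean. 1.0 is a perfect fit; 0.0 means the model is no
    better than predicting the observed mean."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    denom = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / denom
```

An NSE of 0.96 therefore means the model explains nearly all of the variance in the held-out algal measurements.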
Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

  • Chung, Yeojin;Ahn, SungMahn;Yang, Jiheon;Lee, Jaejoon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.1-17
    • /
    • 2017
  • A deep learning framework is software designed to help develop deep learning models. Two of its most important functions are automatic differentiation and GPU utilization. The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal). Recently, Microsoft's deep learning framework, Microsoft Cognitive Toolkit, was released under an open-source license, following Google's Tensorflow a year earlier. The early deep learning frameworks were developed mainly for research at universities. Since the inception of Tensorflow, however, companies such as Microsoft and Facebook have joined the competition in framework development. Given this trend, Google and other companies are expected to continue investing in deep learning frameworks to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare deep learning frameworks, so we compare three that can be used as a Python library: Google's Tensorflow, Microsoft's CNTK, and Theano, which is in a sense a predecessor of the other two. The most common and important function of deep learning frameworks is the ability to perform automatic differentiation. Basically, all the mathematical expressions of deep learning models can be represented as computational graphs, which consist of nodes and edges. Partial derivatives on each edge of a computational graph can then be obtained. With these partial derivatives, the software can compute the derivative of any node with respect to any variable by applying the chain rule of calculus. First of all, the convenience of coding is, in order, CNTK, Tensorflow, and Theano. The criterion is simply the length of the code; the learning curve and the ease of coding are not the main concern.
According to these criteria, Theano was the most difficult to implement with, while CNTK and Tensorflow were somewhat easier. With Tensorflow, we need to define weight variables and biases explicitly. The reason CNTK and Tensorflow are easier to implement with is that they provide more abstraction than Theano. We should mention, however, that low-level coding is not always bad: it gives us coding flexibility. With low-level coding such as in Theano, we can implement and test any new deep learning models or search methods we can think of. As for execution speed, there is no meaningful difference between the frameworks. According to our experiment, the execution speeds of Theano and Tensorflow are very similar, although the experiment was limited to a CNN model. In the case of CNTK, the experimental environment was not kept the same: the CNTK code had to be run in a PC environment without a GPU, where code executes as much as 50 times more slowly than with a GPU. But we concluded that the difference in execution speed was within the range of variation caused by the different hardware setup. In this study, we compared three deep learning frameworks: Theano, Tensorflow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, and 15 different attributes differentiate them. Some of the important attributes include the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNNs, RNNs, and DBNs. For a user implementing a large-scale deep learning model, support for multiple GPUs or multiple servers is also important. And for someone learning deep learning models, the availability of sufficient examples and references matters as well.
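The automatic differentiation the abstract describes, a computational graph of nodes and edges, per-edge partial derivatives, and the chain rule, can be demonstrated for scalars in a few lines. This is a toy illustration of reverse-mode autodiff, not how any of the three frameworks is actually implemented:

```python
class Node:
    """A scalar node in a computational graph with reverse-mode autodiff."""

    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # (parent_node, local partial derivative) pairs
        self.grad = 0.0

    def __add__(self, other):
        # d(a+b)/da = 1, d(a+b)/db = 1
        return Node(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        # d(a*b)/da = b, d(a*b)/db = a
        return Node(self.value * other.value,
                    ((self, other.value), (other, self.value)))

    def backward(self, seed=1.0):
        """Chain rule: accumulate d(output)/d(node) along every graph edge."""
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)
```

Calling `backward()` on the output node fills every variable's `grad` with the derivative of the output with respect to it, exactly the capability the frameworks provide at scale for tensors.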