• Title/Summary/Keyword: datasets

Search Results: 2,182

Implementation of KoBERT-based profanity detection model and FAST API server (KoBERT 기반 비속어 검출 모델 및 FAST API 서버 구현)

  • Young-Min Kim;Seung-Min Park
    • The Journal of the Korea institute of electronic communication sciences / v.19 no.6 / pp.1147-1152 / 2024
  • This paper presents a study in which a model is built to distinguish sentences containing profanity from those that do not by applying transfer learning to KoBERT (Korean BERT). The model is implemented as a web service using Python's FastAPI framework. The dataset consists of sentences collected from various online communities and social media platforms; after a preprocessing stage, the sentences were labeled according to the presence of profanity. A classification model was built on KoBERT, and transfer learning techniques yielded high accuracy in profanity detection. Additionally, a web service implemented with FastAPI processes text data received through POST requests from clients and returns whether profanity is present. This study confirms the potential of KoBERT for profanity detection and demonstrates practical applicability through the web-service implementation. Future research will aim to improve model performance with more diverse datasets and to implement a real-time profanity filtering system.
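
As an illustration of the kind of service the abstract describes, here is a minimal sketch of a FastAPI endpoint wrapping a fine-tuned KoBERT classifier. This is not the paper's code: the checkpoint path, tokenizer choice, and label convention are assumptions.

```python
# Hypothetical sketch: serving a fine-tuned KoBERT profanity classifier with FastAPI.
# The model path and the "label 1 == profanity" convention are assumptions.
import torch
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

app = FastAPI()
tokenizer = AutoTokenizer.from_pretrained("monologg/kobert", trust_remote_code=True)
model = AutoModelForSequenceClassification.from_pretrained("./kobert-profanity")  # fine-tuned checkpoint (placeholder)
model.eval()

class TextIn(BaseModel):
    text: str

@app.post("/predict")
def predict(item: TextIn):
    # Tokenize the incoming sentence and run a single forward pass.
    inputs = tokenizer(item.text, return_tensors="pt", truncation=True, max_length=128)
    with torch.no_grad():
        logits = model(**inputs).logits
    label = int(logits.argmax(dim=-1))
    return {"profanity": bool(label)}  # assumes label 1 means profanity
```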

Key Features and Performance Evaluation of the International Standard for Learning-based Image Compression, JPEG AI (학습 기반 영상 압축 국제 표준(JPEG AI)의 주요 특징 및 성능 평가)

  • Jong-Ho Kim
    • The Journal of the Korea institute of electronic communication sciences / v.19 no.6 / pp.1271-1280 / 2024
  • JPEG AI is an international standard for learning-based image coding that leverages the deep learning techniques responsible for groundbreaking advances in compression performance. It addresses the rapid increase in the generation and use of image data and is one of the latest standardization efforts in this field. JPEG AI aims to meet the requirements of a wide range of applications, including cloud systems, video surveillance, autonomous vehicles, image data monitoring, and media distribution. To achieve this, it reduces bandwidth and storage requirements by up to 50% at the same visual quality and provides a framework in which the compressed bitstream can be used directly for computer vision and image processing tasks. This paper explains the goals of JPEG AI, the selection of training datasets, the standardization process, a performance comparison of key proposals, and the future standardization schedule, in order to clarify the characteristics of this new international standard.

Comparative Study of Regression and Time Series Analysis for Predicting Tomato Sweetness (토마토 당도 예측을 위한 회귀분석과 시계열 분석 비교 연구)

  • Nam-Gon Baek;Jin-Seong Kim;Eun-Sung Choi;Chun-Bo Sim;Se-Hoon Jung
    • The Journal of the Korea institute of electronic communication sciences / v.19 no.6 / pp.1403-1412 / 2024
  • Accurate sweetness prediction has become increasingly important for improving agricultural product quality and optimizing cultivation processes. Currently, fruit sweetness is primarily measured with destructive methods, while non-destructive sweetness meters are limited by partial measurement and surface damage. This study performed regression and time-series analyses to predict tomato sweetness using intelligent smart-farm datasets from AI Hub. We conducted regression analysis with XGBoost and LightGBM, and time-series analysis with LSTM. In the experiments, the regression models recorded negative R² values with high error rates. In contrast, the LSTM time-series model achieved a mean absolute error (MAE) of 0.224, an acceptable level relative to the typical range of tomato sweetness. This suggests that tomato sweetness is closely tied to time-series characteristics over the growth period. The study demonstrates the feasibility of practical non-destructive sweetness prediction with the LSTM model and validates the effectiveness of time-series analysis techniques for predicting tomato sweetness.
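
To make the two modeling views concrete, here is a minimal sketch (not the study's code) contrasting a gradient-boosted regressor on per-fruit feature vectors with an LSTM over a window of daily measurements; the feature count, window length, and sweetness range are illustrative assumptions.

```python
# Illustrative sketch: regression on static feature vectors vs. an LSTM on
# sliding windows of the growth period. Shapes and feature names are stand-ins.
import numpy as np
import torch
import torch.nn as nn
from xgboost import XGBRegressor

# --- Regression view: one feature vector per fruit, no temporal ordering ---
X = np.random.rand(500, 6)        # e.g., temperature, humidity, EC, ... (assumed)
y = np.random.rand(500) * 4 + 4   # sweetness in Brix (assumed range)
reg = XGBRegressor(n_estimators=300, learning_rate=0.05).fit(X, y)

# --- Time-series view: a window of daily measurements per fruit ---
class SweetnessLSTM(nn.Module):
    def __init__(self, n_features=6, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):             # x: (batch, days, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # predict sweetness from the last time step

model = SweetnessLSTM()
pred = model(torch.randn(8, 30, 6))  # 8 fruits, 30-day windows (assumed)
```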

Galaxy-Galaxy Blending in SPHEREx Survey Data

  • Kim Dachan;Hyunmi Song;Yigon Kim;Minjin Kim;Hyunjin Shim;Dohyeong Kim;Yongjung Kim;Bomee Lee;Jeong Hwan Lee;Woong-Seob Jeong;Yujin Yang
    • Journal of The Korean Astronomical Society / v.57 no.1 / pp.45-54 / 2024
  • The Spectro-Photometer for the History of the Universe, Epoch of Reionization and Ices Explorer (SPHEREx) will provide all-sky spectral survey data covering optical to mid-infrared wavelengths at a spatial resolution of 6.″2, which can be widely used to study galaxy formation and evolution. We investigate galaxy-galaxy blending in SPHEREx datasets using mock galaxy catalogs generated from cosmological simulations and observational data. Only ~0.7% of galaxies will be blended with other galaxies in the all-sky survey data, which reach a limiting magnitude of 19 AB mag. However, the fraction of blended galaxies increases dramatically to ~7-9% in the deep survey area around the ecliptic poles, where the depth reaches ~22 AB mag. We examine the impact of blending on number count and luminosity function analyses with SPHEREx data. We find that the number count can be overestimated by up to 10-20% in the deep regions due to flux boosting, suggesting that the impact of galaxy-galaxy blending on the number count is moderate. However, galaxy-galaxy blending can change the luminosity function by up to 50% over a wide range of redshifts. As we only employ the magnitude limit at Ks-band for source detection, the blending fractions determined in this study should be regarded as lower limits.
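
As a rough illustration of how a blending fraction can be estimated from a mock catalog (this is not the paper's pipeline), one can count sources whose nearest neighbour falls within the instrument's spatial scale. The matching radius and magnitude cut below follow the abstract; the function and column names are hypothetical.

```python
# Illustrative sketch: fraction of catalog sources with a neighbour closer than
# the SPHEREx pixel scale, down to a chosen magnitude limit.
import numpy as np
from astropy import units as u
from astropy.coordinates import SkyCoord

def blended_fraction(ra_deg, dec_deg, mag, mag_limit=19.0, radius_arcsec=6.2):
    sel = mag <= mag_limit
    coords = SkyCoord(ra=ra_deg[sel] * u.deg, dec=dec_deg[sel] * u.deg)
    # For each source, find the distance to its nearest neighbour in the same
    # catalog (nthneighbor=2 skips the trivial self-match).
    idx, sep2d, _ = coords.match_to_catalog_sky(coords, nthneighbor=2)
    return np.mean(sep2d < radius_arcsec * u.arcsec)
```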

Shallow subsurface structure of the Vulcano-Lipari volcanic complex, Italy, constrained by helicopter-borne aeromagnetic surveys (고해상도 항공자력탐사를 이용한 Italia Vulcano-Lipari 화산 복합체의 천부 지하 구조)

  • Okuma, Shigeo;Nakatsuka, Tadashi;Komazawa, Masao;Sugihara, Mitsuhiko;Nakano, Shun;Furukawa, Ryuta;Supper, Robert
    • Geophysics and Geophysical Exploration / v.9 no.1 / pp.129-138 / 2006
  • Helicopter-borne aeromagnetic surveys were conducted at two epochs three years apart to better understand the shallow subsurface structure of the Vulcano-Lipari volcanic complex, Aeolian Islands, southern Italy, and to monitor the volcanic activity of the area. As there was no meaningful difference between the two magnetic datasets that would imply an apparent change in volcanic activity, the datasets were merged to produce an aeromagnetic map with wider coverage than either dataset alone. Apparent magnetisation intensity mapping applied to terrain-corrected magnetic anomalies showed local magnetisation highs in and around Fossa Cone, suggesting heterogeneity of the cone. Magnetic modelling was conducted for three of these magnetisation highs; each model implied the presence of concealed volcanic products overlain by pyroclastic rocks from the Fossa crater. The model for the Fossa crater area suggests a buried trachytic lava flow on the southern edge of the present crater. The magnetic model at Forgia Vecchia suggests that the phreatic cones can be interpreted as a concealed eruptive centre, with thick latitic lavas filling Fossa Caldera. However, the distribution of these lavas appears limited to a smaller area than drilling results suggested, which can be explained partly by alteration of the lavas through intense hydrothermal activity, as seen in the geothermal areas close to Porto Levante. The magnetic model at north-eastern Fossa Cone implies that thick lavas accumulated at another eruptive centre in the early stage of Fossa's activity. Recent geoelectric surveys showed high-resistivity zones in the areas of the last two magnetic models.

Geoscientific land management planning in salt-affected areas (염기화된 지역에서의 지구과학적 토지 관리 계획)

  • Abbott, Simon;Chadwick, David;Street, Greg
    • Geophysics and Geophysical Exploration / v.10 no.1 / pp.98-109 / 2007
  • Over the last twenty years, farmers in Western Australia have begun to change land management practices to minimise the effects of salinity on agricultural land. A farm plan is often used as a guide to implementing changes, but most plans are based on minimal data and an understanding of surface water flow only, so they do not effectively address the processes that lead to land salinisation. A project at Broomehill in the south-west of Western Australia applied an approach using a large suite of geospatial data that measured surface and subsurface characteristics of the regolith, together with other data such as climate information and agricultural history. Fundamental to the approach was the collection of airborne geophysical data over the study area: radiometric data reflecting soils, magnetic data reflecting bedrock geology, and SALTMAP electromagnetic data reflecting regolith thickness and conductivity. When interpreted, these datasets added paddock-scale information on geology and hydrogeology to the other datasets, enabling on-farm and in-paddock decisions relating directly to the mechanisms driving the salinising process. The location and design of surface-water management structures such as grade banks and seepage interceptor banks were significantly influenced by the information derived from the airborne geophysical data. To evaluate the effectiveness of this planning, one whole-farm plan has been monitored by the Department of Agriculture and the farmer since 1996. The implemented plan shows a positive cost-benefit ratio, and the farm is now in the top 5% of farms in its regional productivity benchmarking group. The main influence of the airborne geophysical data on the farm plan was on the location of earthworks and revegetation proposals: any infrastructure proposal required a hydrological or hydrogeological justification based on the site-specific data. This approach reduced the spatial density of proposed works compared to other farm plans not guided by site-specific hydrogeological information.

Experiments on the stability of the spatial autocorrelation method (SPAC) and linear array methods and on the imaginary part of the SPAC coefficients as an indicator of data quality (공간자기상관법 (SPAC)의 안정성과 선형 배열법과 자료 품질 지시자로 활용되는 SPAC 계수의 허수 성분에 대한 실험)

  • Margaryan, Sos;Yokoi, Toshiaki;Hayashi, Koichi
    • Geophysics and Geophysical Exploration / v.12 no.1 / pp.121-131 / 2009
  • In recent years, microtremor array observations have been used to estimate shear-wave velocity structures. One such method is the conventional spatial autocorrelation (SPAC) method, which requires simultaneous recording with at least three or four sensors. Modified SPAC methods such as 2sSPAC, and linear array methods, allow shear-wave structures to be estimated using only two sensors, but suffer from instability of the spatial autocorrelation coefficient at frequencies above 1.0 Hz. Based on microtremor measurements from four triangular arrays of different sizes and four same-size triangular and linear arrays, we demonstrate the stability of the SPAC coefficient over the frequency range from 2 to 4 or 5 Hz. The phase velocities, obtained by fitting the SPAC coefficients to the Bessel function, are also consistent up to 5 Hz. All data were processed with the SPAC method, except that no spatial averaging was applied in the linear array cases. The arrays were deployed sequentially at different times, near a site with existing Parallel Seismic (PS) borehole logging data. We also used the imaginary part of the SPAC coefficients as a data-quality indicator. Based on perturbations of the autocorrelation spectrum (and in some cases visual examination of the record waveforms), we divided the data into 'reliable' and 'unreliable' categories, then calculated the imaginary part of the SPAC spectrum for the 'reliable', 'unreliable', and complete (both combined) datasets for each array and compared the results. When the azimuthal distribution of stations is insufficient (the linear array), the imaginary curve shows some instability and can therefore be regarded as an indicator of insufficient spatial averaging. However, when the coherency of the wavefield is low, the imaginary curve does not show any significant instability.
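
The method rests on the relation ρ(f, r) = J₀(2πfr/c(f)) between the azimuthally averaged SPAC coefficient at station separation r and the phase velocity c(f). Below is a schematic sketch of recovering c(f) by fitting observed coefficients to the Bessel function; the input arrays, starting value, and bounds are assumptions, not the authors' processing code.

```python
# Schematic sketch of the core SPAC relation: rho(f, r) = J0(2*pi*f*r / c(f)).
# Phase velocity c is recovered per frequency by a 1-D least-squares fit.
import numpy as np
from scipy.optimize import least_squares
from scipy.special import j0

def phase_velocity(freqs, spac_obs, r, c0=500.0):
    """Fit c(f) from observed SPAC coefficients at station separation r [m]."""
    velocities = []
    for f, rho in zip(freqs, spac_obs):
        res = least_squares(lambda c: j0(2 * np.pi * f * r / c) - rho,
                            x0=c0, bounds=(50.0, 5000.0))  # assumed velocity bounds
        velocities.append(res.x[0])
    return np.array(velocities)
```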

Product Recommender Systems using Multi-Model Ensemble Techniques (다중모형조합기법을 이용한 상품추천시스템)

  • Lee, Yeonjeong;Kim, Kyoung-Jae
    • Journal of Intelligence and Information Systems / v.19 no.2 / pp.39-54 / 2013
  • The recent explosive growth of electronic commerce offers customers many advantageous purchase opportunities, and customers who lack sufficient knowledge about their purchases may accept product recommendations. Product recommender systems automatically reflect users' preferences and provide recommendation lists, and they have become one of the most popular tools for one-to-one marketing in online shopping stores. However, recommender systems that do not properly reflect user preferences cause disappointment and waste users' time. In this study, we propose a novel recommender system that uses data mining and multi-model ensemble techniques to enhance recommendation performance by capturing user preferences more precisely. The research data were collected from a real-world online shopping store that sells products from famous art galleries and museums in Korea. The data initially contained 5,759 transactions, of which 3,167 remained after records with null values were deleted. We transformed the categorical variables into dummy variables and excluded outliers. The proposed model consists of two steps. The first step predicts customers who are highly likely to purchase products in the online shopping store. In this step, we use logistic regression, decision trees, and artificial neural networks to predict likely purchasers in each product group, performing these data mining analyses with SAS E-Miner. We partition the data into modeling and validation sets for logistic regression and decision trees, and into training, test, and validation sets for the artificial neural network; the validation set is identical across all experiments. We then combine the results of the individual predictors using multi-model ensemble techniques, namely bagging and bumping. Bagging, short for "Bootstrap Aggregation," combines the outputs of several machine learning models to raise the performance and stability of prediction or classification; it is a special form of averaging. Bumping, short for "Bootstrap Umbrella of Model Parameters," instead retains only the model with the lowest error. The results show that bumping outperforms bagging and the individual predictors in every product group except "Poster," where the artificial neural network performs best. In the second step, we use market basket analysis to extract association rules for co-purchased products. We extracted thirty-one association rules according to lift, support, and confidence, setting the minimum transaction frequency to support an association at 5%, the maximum number of items per association at 4, and the minimum confidence for rule generation at 10%. Rules with a lift below 1 were excluded, and fifteen association rules remained after removing duplicates. Of these, eleven relate products within the "Office Supplies" group, one relates the "Office Supplies" and "Fashion" groups, and the other three relate the "Office Supplies" and "Home Decoration" groups.
Finally, the proposed recommender system provides recommendation lists to the appropriate customers. We tested the usability of the proposed system with a prototype and real-world transaction and profile data, building the prototype with ASP, JavaScript, and Microsoft Access. In addition, we surveyed user satisfaction with the recommended product lists from the proposed system against randomly selected product lists. The survey participants were 173 users of MSN Messenger, Daum Café, and P2P services, and satisfaction was rated on a five-point Likert scale. A paired-sample t-test on the survey results shows that the proposed model outperforms random selection at the 1% statistical significance level, meaning users were significantly more satisfied with the recommended product lists. The results suggest the proposed system may be useful in real-world online shopping stores.
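
For concreteness, here is a minimal sketch of the bumping procedure as described above: fit the same learner on bootstrap resamples and keep only the fit with the lowest error on the full training set, in contrast to bagging, which averages all fits. The learner and scoring choices are illustrative, not the study's SAS E-Miner setup.

```python
# Illustrative sketch of bumping: train on bootstrap resamples, keep the single
# model with the lowest error on the full training data.
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

def bump(estimator, X, y, n_boot=25, seed=0):
    rng = np.random.default_rng(seed)
    best_model, best_err = None, np.inf
    for _ in range(n_boot):
        idx = rng.integers(0, len(X), size=len(X))   # bootstrap resample
        model = clone(estimator).fit(X[idx], y[idx])
        err = log_loss(y, model.predict_proba(X))    # score on the full data
        if err < best_err:
            best_model, best_err = model, err
    return best_model

# Usage (X_train, y_train assumed):
# model = bump(LogisticRegression(max_iter=1000), X_train, y_train)
```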

KB-BERT: Training and Application of Korean Pre-trained Language Model in Financial Domain (KB-BERT: 금융 특화 한국어 사전학습 언어모델과 그 응용)

  • Kim, Donggyu;Lee, Dongwook;Park, Jangwon;Oh, Sungwoo;Kwon, Sungjun;Lee, Inyong;Choi, Dongwon
    • Journal of Intelligence and Information Systems / v.28 no.2 / pp.191-206 / 2022
  • Recently, it has become the de facto approach to utilize a pre-trained language model (PLM) to achieve state-of-the-art performance on various natural language tasks (called downstream tasks) such as sentiment analysis and question answering. However, like any other machine learning method, a PLM tends to depend on the data distribution seen during training and performs worse on unseen (out-of-distribution) domains. For this reason, there have been many efforts to develop domain-specific PLMs for fields such as the medical and legal industries. In this paper, we discuss the training of a finance-specific PLM for Korean and its applications. Our model, KB-BERT, is trained on a carefully curated financial corpus that includes domain-specific documents such as financial reports. We provide extensive performance evaluation results on three natural language tasks: topic classification, sentiment analysis, and question answering. Compared to state-of-the-art Korean PLMs such as KoELECTRA and KLUE-RoBERTa, KB-BERT shows comparable performance on general datasets built from common corpora like Wikipedia and news articles, and it outperforms the compared models on finance-domain datasets that require finance-specific knowledge.
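
A typical downstream fine-tuning setup for such a domain-specific PLM, sketched with the Hugging Face Trainer API, might look as follows. The checkpoint path, label count, and dataset objects are placeholders; the abstract does not give KB-BERT's actual training code.

```python
# Hypothetical sketch: fine-tuning a domain-specific PLM on a downstream
# classification task. "path/to/kb-bert" and num_labels are placeholders.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("path/to/kb-bert")        # placeholder
model = AutoModelForSequenceClassification.from_pretrained(
    "path/to/kb-bert", num_labels=5)                                # e.g., 5 topics (assumed)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=256)

# train_ds / eval_ds are assumed to be pre-built `datasets.Dataset` objects:
# train_ds = train_ds.map(tokenize, batched=True)
args = TrainingArguments(output_dir="kb-bert-topic", num_train_epochs=3,
                         per_device_train_batch_size=32)
# Trainer(model=model, args=args, train_dataset=train_ds,
#         eval_dataset=eval_ds, tokenizer=tokenizer).train()
```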

GEase-K: Linear and Nonlinear Autoencoder-based Recommender System with Side Information (GEase-K: 부가 정보를 활용한 선형 및 비선형 오토인코더 기반의 추천시스템)

  • Taebeom Lee;Seung-hak Lee;Min-jeong Ma;Yoonho Cho
    • Journal of Intelligence and Information Systems / v.29 no.3 / pp.167-183 / 2023
  • In the recent recommender systems literature, various studies have sought to model sparse data effectively. Among these, GLocal-K (Global and Local Kernels for Recommender Systems) combines global and local kernels to provide personalized recommendations that consider both global data patterns and individual user characteristics. However, because it relies on kernel tricks, GLocal-K performs worse on highly sparse data and, lacking side information, struggles to make recommendations for new users or items. In this paper, to address these limitations, we propose GEase-K (Global and EASE Kernels for Recommender Systems), which incorporates the EASE (Embarrassingly Shallow Autoencoders for Sparse Data) model and leverages side information. First, we substitute EASE for the local kernel in GLocal-K to improve recommendation performance on highly sparse data: EASE is an autoencoder with a simple linear structure that performs well on extremely sparse data through regularization and learned item similarities. Second, we use side information to alleviate the cold-start problem, employing a conditional autoencoder structure during training to deepen the model's understanding of user-item similarities. By combining linear and nonlinear structures and exploiting side information, GEase-K remains robust on highly sparse data and in cold-start situations. Experimental results show that GEase-K outperforms GLocal-K on the RMSE and MAE metrics on the highly sparse GoodReads and ModCloth datasets. Furthermore, in cold-start experiments with four groups on the same datasets, GEase-K shows superior performance compared to GLocal-K.
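
Since GEase-K replaces GLocal-K's local kernel with EASE, the closed-form EASE solution (Steck, 2019) is the key building block: minimize ‖X − XB‖² + λ‖B‖² subject to diag(B) = 0, which yields B = I − P·diagMat(1/diag(P)) with P = (XᵀX + λI)⁻¹. A minimal NumPy sketch, with λ as an assumed hyperparameter:

```python
# Sketch of the EASE closed-form solution that GEase-K builds on.
# lam is an assumed regularization strength, not a value from the paper.
import numpy as np

def ease(X, lam=500.0):
    """X: (users x items) interaction matrix; returns item-item weight matrix B."""
    G = X.T @ X + lam * np.eye(X.shape[1])  # regularized Gram matrix
    P = np.linalg.inv(G)
    B = P / (-np.diag(P))                   # B_ij = -P_ij / P_jj (column-wise)
    np.fill_diagonal(B, 0.0)                # enforce the zero-diagonal constraint
    return B

# scores = X @ ease(X)   # rank unseen items for each user
```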