• Title/Summary/Keyword: Dataset Quality


A Divide-Conquer U-Net Based High-Quality Ultrasound Image Reconstruction Using Paired Dataset (짝지어진 데이터셋을 이용한 분할-정복 U-net 기반 고화질 초음파 영상 복원)

  • Minha Yoo;Chi Young Ahn
    • Journal of Biomedical Engineering Research / v.45 no.3 / pp.118-127 / 2024
  • Deep learning methods for enhancing the quality of medical images commonly use unpaired datasets, because acquiring paired datasets through commercial imaging systems is impractical. In this paper, we propose a supervised learning method to enhance the quality of ultrasound images. The U-net model incorporates a divide-and-conquer approach that splits an image into four parts and processes each separately, to overcome data shortage and shorten training time. The proposed model is trained on a paired dataset of 828 pairs of low-quality and high-quality 512x512-pixel images obtained by varying the number of channels for the same subject. Of the 828 pairs, 684 are used for training and the remaining 144 for testing. On the test set, the average Mean Squared Error (MSE) was reduced from 87.6884 in the low-quality images to 45.5108 in the restored images, the average Peak Signal-to-Noise Ratio (PSNR) improved from 28.7550 to 31.8063, and the average Structural Similarity Index (SSIM) increased from 0.4755 to 0.8511, demonstrating a significant enhancement in image quality.
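The image split and the MSE/PSNR metrics mentioned in the abstract can be sketched as follows; this is a minimal illustrative implementation (images as nested lists of pixel values), not the authors' code:

```python
import math

def mse(a, b):
    """Mean squared error between two equal-sized grayscale images (lists of rows)."""
    n = sum(len(row) for row in a)
    return sum((x - y) ** 2 for ra, rb in zip(a, b) for x, y in zip(ra, rb)) / n

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10 * math.log10(max_val ** 2 / m)

def quadrants(img):
    """Divide-and-conquer step: cut an image into four equal parts (TL, TR, BL, BR)."""
    h, w = len(img), len(img[0])
    top, bottom = img[: h // 2], img[h // 2 :]
    return (
        [row[: w // 2] for row in top], [row[w // 2 :] for row in top],
        [row[: w // 2] for row in bottom], [row[w // 2 :] for row in bottom],
    )
```

In the paper's setup each 512x512 image would yield four 256x256 parts that are processed independently, which shrinks the per-sample input and effectively multiplies the number of training samples.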

Construction and Effectiveness Evaluation of Multi Camera Dataset Specialized for Autonomous Driving in Domestic Road Environment (국내 도로 환경에 특화된 자율주행을 위한 멀티카메라 데이터 셋 구축 및 유효성 검증)

  • Lee, Jin-Hee;Lee, Jae-Keun;Park, Jaehyeong;Kim, Je-Seok;Kwon, Soon
    • IEMEK Journal of Embedded Systems and Applications / v.17 no.5 / pp.273-280 / 2022
  • With the advancement of deep learning technology, securing high-quality datasets for verifying developed technology has become an important issue, and many research groups are focusing on developing deep learning models robust to the domestic road environment. In particular, unlike expressways and automobile-only roads, complex city driving environments mix various dynamic objects such as motorbikes, electric kickboards, large buses/trucks, freight cars, pedestrians, and traffic lights. In this paper, we build a dataset through multi-camera-based processing (collection, refinement, and annotation) covering the various objects on city roads, and evaluate its quality and validity using a YOLO-based object detection model. We then evaluate the dataset quantitatively by comparison with a public dataset, and qualitatively by comparison with experimental results obtained on an open platform. We generated our 2D dataset according to the annotation rules of the KITTI/COCO datasets and compared performance with the public dataset using the KITTI/COCO evaluation rules. In this comparison, our dataset showed about 3 to 53% higher performance, validating its effectiveness.
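KITTI/COCO-style detection evaluation rests on intersection-over-union (IoU) matching between predicted and ground-truth boxes; a minimal sketch of that core computation (corner-format boxes, an assumption for illustration):

```python
def iou(box_a, box_b):
    """Intersection over union of two [x1, y1, x2, y2] boxes.

    Detection benchmarks count a prediction as a true positive when its
    IoU with a ground-truth box exceeds a threshold (e.g. 0.5 or 0.7).
    """
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)  # overlap area, 0 if disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

The thresholds and class matching rules differ between KITTI and COCO, but both reduce to this per-box overlap score.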

Recommendations for the Construction of a Quality-Controlled Stress Measurement Dataset (품질이 관리된 스트레스 측정용 데이터셋 구축을 위한 제언)

  • Tai Hoon KIM;In Seop NA
    • Smart Media Journal / v.13 no.2 / pp.44-51 / 2024
  • The construction of a stress measurement dataset plays a crucial role in various modern applications. In particular, for efficient training of artificial intelligence models for stress measurement, it is essential to compare various biases and construct a quality-controlled dataset. In this paper, we propose the construction of a quality-managed stress measurement dataset through the comparison of various biases. To this end, we introduce stress definitions and measurement tools, the process of building an artificial intelligence stress dataset, strategies for overcoming biases to improve quality, and considerations for stress data collection. Specifically, to manage dataset quality, we discuss biases that may arise during stress data collection, such as selection bias, measurement bias, causal bias, confirmation bias, and artificial intelligence bias. Through this paper, we aim to provide a systematic understanding of the considerations for stress data collection and the biases that may occur during dataset construction, contributing to the construction of a quality-guaranteed dataset by overcoming these biases.

Stock News Dataset Quality Assessment by Evaluating the Data Distribution and the Sentiment Prediction

  • Alasmari, Eman;Hamdy, Mohamed;Alyoubi, Khaled H.;Alotaibi, Fahd Saleh
    • International Journal of Computer Science & Network Security / v.22 no.2 / pp.1-8 / 2022
  • This work provides a reliable, classified stock dataset merged with Saudi stock news. The dataset allows researchers to analyze and better understand the realities, impacts, and relationships between stock news and stock fluctuations. The data were collected from the Saudi stock market via the Corporate News (CN) and Historical Data Stocks (HDS) datasets. As their names suggest, CN contains news, and HDS provides information on how stock values change over time. Both datasets cover the period from 2011 to 2019, have 30,098 rows, and have 16 variables, four of which they share and 12 of which differ. The combined dataset presented here therefore includes 30,098 published news pieces and information about stock fluctuations across nine years. Stock news polarity has been interpreted in various ways by native Arabic speakers familiar with the stock domain, so the polarity was categorized manually based on Arabic semantics. As the Saudi stock market contributes massively to the international economy, this dataset is essential for stock investors and analysts. It was prepared for educational and scientific purposes, motivated by the scarcity of data describing the impact of Saudi stock news on stock activities, and will be useful across many sectors, including stock market analytics, data mining, statistics, machine learning, and deep learning. The data are evaluated by testing the distribution of the categories over classes and the sentiment prediction accuracy. The results show that the distribution of polarity over sectors is balanced. A Naive Bayes (NB) model was developed to evaluate data quality based on sentiment classification, supporting the data's reliability by achieving 68% accuracy. The evaluation results thus indicate that the dataset is reliable, ready, and of high quality for many uses.
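Using a Naive Bayes classifier as a data-quality probe, as the abstract describes, can be sketched with a minimal multinomial NB (add-one smoothing, whitespace tokenization); this is an illustrative toy, not the authors' model or preprocessing:

```python
import math
from collections import Counter

class TinyNB:
    """Minimal multinomial Naive Bayes with add-one (Laplace) smoothing."""

    def fit(self, docs, labels):
        self.classes = sorted(set(labels))
        self.prior = Counter(labels)                      # class counts
        self.word_counts = {c: Counter() for c in self.classes}
        for doc, lab in zip(docs, labels):
            self.word_counts[lab].update(doc.split())
        self.vocab = {w for c in self.classes for w in self.word_counts[c]}
        return self

    def predict(self, doc):
        def log_score(c):
            total = sum(self.word_counts[c].values())
            s = math.log(self.prior[c] / sum(self.prior.values()))
            for w in doc.split():
                # smoothed conditional probability of each token given the class
                s += math.log((self.word_counts[c][w] + 1) / (total + len(self.vocab)))
            return s
        return max(self.classes, key=log_score)
```

If such a model reaches reasonable held-out accuracy (68% in the paper), the labels are at least consistent enough for the classes to be learnable, which is the sense in which the authors use it as a quality check.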

A Brief Survey into the Field of Automatic Image Dataset Generation through Web Scraping and Query Expansion

  • Bart Dikmans;Dongwann Kang
    • Journal of Information Processing Systems / v.19 no.5 / pp.602-613 / 2023
  • High-quality image datasets are in high demand for various applications. With many online sources providing manually collected datasets, a persistent challenge is to fully automate the dataset collection process. In this study, we survey the field of automatic image dataset generation by analyzing a collection of existing studies. We also examine fields closely related to automated dataset generation, such as query expansion, web scraping, and dataset quality. We assess how both noise and regional search engine differences can be addressed using automated search query expansion focused on hypernyms, while still allowing user-specific manual query expansion. Combining these aspects outlines how a modern web scraping application can produce large-scale image datasets.
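Hypernym-based query expansion can be sketched as follows; the hypernym table here is a hypothetical stand-in (a real pipeline might query WordNet or a search API), so names and mappings are assumptions for illustration:

```python
# Hypothetical hypernym table: each term maps to its more general parent term.
HYPERNYMS = {"terrier": "dog", "beagle": "dog", "dog": "animal"}

def expand_query(term, table=HYPERNYMS):
    """Expand an image-search term with its hypernym chain.

    Adding broader parent terms ("terrier" -> "terrier dog animal") helps
    disambiguate the query and filter noisy, unrelated search results.
    """
    terms = [term]
    while terms[-1] in table:
        terms.append(table[terms[-1]])
    return " ".join(terms)
```

The expanded string would then be fed to the scraper's search backend; manual, user-specific terms can simply be appended to the same chain.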

STAR-24K: A Public Dataset for Space Common Target Detection

  • Zhang, Chaoyan;Guo, Baolong;Liao, Nannan;Zhong, Qiuyun;Liu, Hengyan;Li, Cheng;Gong, Jianglei
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.2 / pp.365-380 / 2022
  • Supervised learning is the current mainstream approach to target detection, and a high-quality dataset is a prerequisite for good detection performance. The larger and higher-quality the dataset, the stronger the generalization ability of the model; in other words, the dataset determines the upper limit of what the model can learn. A convolutional neural network optimizes its parameters under strong supervision: the error is calculated by comparing the predicted bounding box with the manually labeled ground-truth box and then propagated back through the network for continuous optimization. Strongly supervised learning relies mainly on a large number of images, so the number and quality of images directly affect the learning results. This paper proposes STAR-24K (a dataset for Space TArget Recognition with more than 24,000 images) for detecting common targets in space. Since no publicly available dataset for space target detection currently exists, we extracted images from pictures and videos released on the official websites of NASA (National Aeronautics and Space Administration) and ESA (the European Space Agency) and expanded them to 24,451 images. We evaluate popular object detection algorithms to build a benchmark. Our STAR-24K dataset is publicly available at https://github.com/Zzz-zcy/STAR-24K.

AI Model-Based Automated Data Cleaning for Reliable Autonomous Driving Image Datasets (자율주행 영상데이터의 신뢰도 향상을 위한 AI모델 기반 데이터 자동 정제)

  • Kana Kim;Hakil Kim
    • Journal of Broadcast Engineering / v.28 no.3 / pp.302-313 / 2023
  • This paper aims to develop a framework that can fully automate the quality management of training data used in large-scale Artificial Intelligence (AI) models built by the Ministry of Science and ICT (MSIT) in the 'AI Hub Data Dam' project, which has invested more than 1 trillion won since 2017. Autonomous driving technology using AI has achieved excellent performance through many studies, but it requires a large amount of high-quality data to train the model. Moreover, it is still difficult for humans to directly inspect the processed data and verify its validity, and a model trained on erroneous data can cause fatal problems in real life. This paper presents a dataset reconstruction framework that removes abnormal data from a constructed dataset, along with strategies to improve AI model performance and training efficiency by rebuilding the data into a reliable dataset. The framework's validity was verified through an experiment on the autonomous driving dataset published through the AI Hub of the National Information Society Agency (NIA), confirming that the data could be rebuilt into a reliable dataset from which abnormal data has been removed.
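The core filtering step of such a cleaning framework can be sketched as a threshold over a per-sample validity score; `score_fn` here stands in for the trained inspection model, and both the name and the threshold are assumptions for illustration, not the paper's actual method:

```python
def clean_dataset(samples, score_fn, threshold=0.5):
    """Split samples into kept/removed by a model-assigned validity score.

    score_fn(sample) -> float in [0, 1]; samples scoring below the
    threshold are treated as abnormal and removed from the rebuilt dataset.
    """
    kept, removed = [], []
    for s in samples:
        (kept if score_fn(s) >= threshold else removed).append(s)
    return kept, removed
```

In practice the removed subset would be routed to human review rather than discarded outright, so label errors in the inspection model itself can still be caught.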

Development of Dataset Evaluation Criteria for Learning Deepfake Video (딥페이크 영상 학습을 위한 데이터셋 평가기준 개발)

  • Kim, Rayng-Hyung;Kim, Tae-Gu
    • Journal of Korean Society of Industrial and Systems Engineering / v.44 no.4 / pp.193-207 / 2021
  • As the Deepfake phenomenon spreads worldwide, mainly through videos on web platforms, it is urgent to address the issue in time. Researchers have recently discussed deepfake video datasets extensively, but it has been pointed out that existing Deepfake datasets do not properly reflect the potential threat and realism due to various limitations. Although research is needed to establish an agreed-upon concept of a high-quality dataset or to suggest evaluation criteria, only a handful of studies have examined this to date. This study therefore focuses on developing evaluation criteria for Deepfake video datasets. We present the fitness of the Deepfake dataset, derive evaluation criteria through a review of previous studies, and refine the criteria through AHP structuring and analysis. The results show that Facial Expression, Validation, and Data Characteristics are important determinants of data quality. We interpret this as reflecting the importance of minimizing defects and of presenting results based on scientific methods when evaluating quality. This study has implications in that it suggests fitness and evaluation criteria for Deepfake datasets. Since the criteria were derived from items considered in previous studies, all of them should be effective for quality improvement. They are also expected to serve as criteria for selecting an appropriate deepfake dataset or as a reference for designing a Deepfake data benchmark. This study could not apply the proposed criteria to existing Deepfake datasets; in future research, they will be applied to existing datasets to evaluate the strengths and weaknesses of each and to consider the implications for Deepfake research.
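The AHP step used to weight the evaluation criteria can be sketched with the row geometric-mean approximation of the priority vector; the example matrix is hypothetical, not the paper's actual pairwise judgments:

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP priority weights from a pairwise comparison matrix.

    pairwise[i][j] states how much more important criterion i is than j
    (reciprocal matrix). The row geometric mean, normalized to sum to 1,
    approximates the principal eigenvector used in AHP.
    """
    gm = [math.prod(row) ** (1 / len(row)) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]
```

For a full AHP analysis one would also compute the consistency ratio to check that the expert judgments are not self-contradictory before trusting the weights.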

A Study on Data Adjustment and Quality Enhancement Method for Public Administrative Dataset Records in the Transfer Process-Based on the Experiences of Datawarehouses' ETT (행정정보 데이터세트 기록 이관 시 데이터 보정 및 품질 개선 방법 연구 - 데이터웨어하우스 ETT 경험을 기반으로)

  • Yim, Jin-Hee;Cho, Eun-Hee
    • The Korean Journal of Archival Studies / no.25 / pp.91-129 / 2010
  • As public institutions grow more heavily reliant on information systems, researchers seek various ways to manage and utilize the dataset records accumulated in public information systems. Public administrative dataset records may need data adjustment and quality enhancement while being transferred to an archive system or a shared server. This paper presents data adjustment and quality enhancement methods for public administrative dataset records, referring to the ETT procedures and methods used in constructing data warehouses. It suggests seven typical examples and processing methods for data adjustment and quality enhancement: (1) verification of quantity and data domain, (2) code conversion for consistent code values, (3) creating components from combined information, (4) deciding the precision of date data, (5) standardization of data, (6) adding comment information about code values, and (7) capturing metadata. These should be reviewed during dataset record transfer. The paper formulates data adjustment and quality enhancement requirements for dataset record transfer, which could also serve as data quality requirements for administrative information systems that produce datasets.
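Two of the listed steps, code conversion for consistent values and data-domain verification, can be sketched as follows; the mapping table, field names, and record shape are hypothetical stand-ins for whatever the source system actually uses:

```python
# Hypothetical legacy-code table: source-system codes -> consistent values.
CODE_MAP = {"01": "SEOUL", "02": "BUSAN"}
VALID_DOMAIN = set(CODE_MAP.values())

def adjust_record(record):
    """Apply code conversion and domain verification to one dataset record.

    Unknown codes pass through unchanged but are flagged invalid, so they
    can be reviewed instead of silently corrupting the transferred dataset.
    """
    raw = record.get("region_code")
    converted = CODE_MAP.get(raw, raw)                 # step (2): code conversion
    return {
        **record,
        "region": converted,
        "valid": converted in VALID_DOMAIN,            # step (1): domain check
    }
```

The remaining steps (date precision, standardization, comments, metadata capture) would be further passes of the same shape: a pure function per record, applied during the transfer pipeline.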

A Study on Data Quality Evaluation of Administrative Information Dataset (행정정보데이터세트의 데이터 품질평가 연구)

  • Song, Chiho;Yim, Jinhee
    • The Korean Journal of Archival Studies / no.71 / pp.237-272 / 2022
  • In 2019, a pilot project to establish a record management system for administrative information datasets began in earnest under the leadership of the National Archives of Korea. Based on the results of the three-year project through 2021, the improved administrative information dataset management plan will be reflected in public records laws and guidelines, making administrative information datasets the target of full-scale public records management. Although public records have been converted to electronic documents and even the datasets of administrative information systems are now included in full-scale public records management, research on the quality requirements of the data itself, as the raw material constituting records, is still lacking. If data quality is not guaranteed, all four properties of records are threatened in the dataset, which is both a structure of data and an aggregate of records. Moreover, if the data quality of administrative information systems, built to meet the various needs of institutions' working departments without regard to the standards of the standard records management system, is not reliable, the reliability of the public records themselves cannot be secured. This study builds on the administrative information dataset management plan presented in the "Administrative Information Dataset Recorded Information Service and Utilization Model Study" conducted by the National Archives of Korea in 2021. By referring to various materials, especially the public data policies and guides being promoted across the government, we derive quality evaluation requirements from a records management perspective and present specific indicators. We expect this to be helpful for the records management of administrative information datasets, which will be in full swing in the future.