• Title/Summary/Keyword: Dataset Quality


Damage Detection and Damage Quantification of Temporary works Equipment based on Explainable Artificial Intelligence (XAI)

  • Cheolhee Lee;Taehoe Koo;Namwook Park;Nakhoon Lim
    • Journal of Internet Computing and Services
    • /
    • v.25 no.2
    • /
    • pp.11-19
    • /
    • 2024
  • This paper presents a technology for detecting damage to temporary works equipment used at construction sites with explainable artificial intelligence (XAI). Temporary works equipment is mostly made of steel or aluminum, and because of the durability of these materials it is reused many times. However, because the regulations and restrictions on its reuse are not strict, the use of low-quality or degraded temporary works equipment sometimes causes accidents at construction sites. Currently, safety rules such as government laws, standards, and regulations for quality control of temporary works equipment have not been established. In addition, inspection results often differ according to the inspector's level of training. To overcome these limitations, a method based on AI and image processing technology was developed. Explainable artificial intelligence (XAI) technology was applied so that inspectors can make more exact decisions from the damage detection results produced by the developed AI model's image analysis of temporary works equipment. In the experiments, temporary works equipment was photographed with a 4K-quality camera, the artificial intelligence model was trained with 610 labeled data, and accuracy was tested by analyzing recorded images of temporary works equipment. As a result, the damage detection accuracy of the XAI model was 95.0% for the training dataset, 92.0% for the validation dataset, and 90.0% for the test dataset, demonstrating the reliability of the developed artificial intelligence. The experiments verified the usability of explainable artificial intelligence for detecting damage in temporary works equipment. However, to reach the level of commercial software, the XAI model needs to be trained further on real-world datasets, and its damage detection performance has to be maintained or improved when those datasets are applied.
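The abstract does not state which XAI technique the model uses; a minimal, model-agnostic sketch of one common option, occlusion sensitivity (mask image regions and measure how much the damage score drops), might look like the following. All names are illustrative, and `model_score` is a self-contained stand-in for the paper's trained damage classifier.

```python
def model_score(image):
    # Hypothetical classifier: here the "damage score" is just the mean
    # pixel intensity, so the example runs without a trained model.
    return sum(sum(row) for row in image) / (len(image) * len(image[0]))

def occlusion_map(image, patch=2):
    # Slide a zero patch over the image; regions whose occlusion lowers
    # the score most are the regions the model relies on for its decision.
    base = model_score(image)
    h, w = len(image), len(image[0])
    heat = [[0.0] * w for _ in range(h)]
    for top in range(0, h, patch):
        for left in range(0, w, patch):
            occluded = [row[:] for row in image]
            for i in range(top, min(top + patch, h)):
                for j in range(left, min(left + patch, w)):
                    occluded[i][j] = 0.0          # mask this patch
            drop = base - model_score(occluded)   # importance of the patch
            for i in range(top, min(top + patch, h)):
                for j in range(left, min(left + patch, w)):
                    heat[i][j] = drop
    return heat
```

The resulting heat map is the kind of per-region explanation an inspector could overlay on the equipment photo to see which areas drove the "damaged" decision.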

KOMUChat: Korean Online Community Dialogue Dataset for AI Learning (KOMUChat : 인공지능 학습을 위한 온라인 커뮤니티 대화 데이터셋 연구)

  • YongSang Yoo;MinHwa Jung;SeungMin Lee;Min Song
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.2
    • /
    • pp.219-240
    • /
    • 2023
  • Conversational AI that users can interact with satisfactorily is a long-standing research topic. To develop conversational AI, it is necessary to build training data that reflects real conversations between people, but existing Korean datasets are either not in question-answer format or use honorifics, making it difficult for users to feel closeness. In this paper, we propose a conversation dataset (KOMUChat) consisting of 30,767 question-answer sentence pairs collected from online communities. The question-answer pairs were collected from post titles and first comments of love and relationship counseling boards used by men and women. In addition, we removed abusive records through automatic and manual cleansing to build a high-quality dataset. To verify the validity of KOMUChat, we compared and analyzed the results of generative language models trained on KOMUChat and on a benchmark dataset. The results showed that our dataset outperformed the benchmark dataset in terms of answer appropriateness, user satisfaction, and fulfillment of conversational AI goals. The dataset is the largest open-source single-turn text dataset presented so far, and it is significant as a more friendly Korean dataset that reflects the text style of online communities.
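The collection step described above (pair each post title with its first comment, then drop abusive pairs) can be sketched as follows. The `BANNED` lexicon and the post records are illustrative stand-ins for the paper's actual cleansing rules and community data.

```python
BANNED = {"idiot"}  # placeholder abuse lexicon, not the paper's actual list

def build_pairs(posts):
    """Pair post titles (questions) with first comments (answers)."""
    pairs = []
    for post in posts:
        if not post.get("comments"):
            continue                      # no answer available, skip post
        question = post["title"].strip()
        answer = post["comments"][0].strip()
        text = (question + " " + answer).lower()
        if any(term in text for term in BANNED):
            continue                      # automatic cleansing step
        pairs.append((question, answer))
    return pairs
```

Manual cleansing, as in the paper, would then review the surviving pairs that automated filtering cannot judge reliably.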

Summary on Internet Communication Network Quality Characteristics Using Beta Probability Distribution (베타 확률분포를 이용한 인터넷통신 네트워크 품질특성 요약)

  • Park Sung-Min
    • Proceedings of the Korean Operations and Management Science Society Conference
    • /
    • 2006.05a
    • /
    • pp.1661-1662
    • /
    • 2006
  • Internet communication network quality characteristics are analyzed using Beta probability distribution. Beta probability distribution is chosen for the underlying probability distribution because it is an extremely flexible probability distribution used to model bounded random variables. Based on the fitted Beta probability distribution, a dataset regarding each network quality characteristic is summarized concisely.
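The abstract does not say how the Beta distribution is fitted; one standard choice is the method of moments, sketched below for data bounded in [0, 1] (maximum likelihood is another common option).

```python
def fit_beta(xs):
    # Method-of-moments estimates for Beta(a, b) on [0, 1] data:
    # match the sample mean and variance to the Beta mean and variance.
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    # Valid only while var < mean * (1 - mean), which holds for Beta data.
    common = mean * (1 - mean) / var - 1
    a = mean * common
    b = (1 - mean) * common
    return a, b
```

The fitted (a, b) pair then summarizes each network quality characteristic concisely, as the abstract describes.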


A HIERARCHICAL APPROACH TO HIGH-RESOLUTION HYPERSPECTRAL IMAGE CLASSIFICATION OF LITTLE MIAMI RIVER WATERSHED FOR ENVIRONMENTAL MODELING

  • Heo, Joon;Troyer, Michael;Lee, Jung-Bin;Kim, Woo-Sun
    • Proceedings of the KSRS Conference
    • /
    • v.2
    • /
    • pp.647-650
    • /
    • 2006
  • Compact Airborne Spectrographic Imager (CASI) hyperspectral imagery was acquired over the Little Miami River Watershed (1756 square miles) in Ohio, U.S.A., which is one of the largest hyperspectral image acquisitions. For the development of a 4m-resolution land cover dataset, a hierarchical approach was employed using two different classification algorithms: 'Image Object Segmentation' for level-1 and 'Spectral Angle Mapper' for level-2. This classification scheme was developed to overcome the spectral inseparability of urban and rural features and to deal with radiometric distortions due to cross-track illumination. The land cover class members were lentic, lotic, forest, corn, soybean, wheat, dry herbaceous, grass, urban barren, rural barren, urban/built, and unclassified. The final phase of processing was completed after an extensive Quality Assurance and Quality Control (QA/QC) phase. With respect to the eleven land cover class members, the overall accuracy with a total of 902 reference points was 83.9% at 4m resolution. The dataset is available for public research, and applications of this product will represent an improvement over more commonly utilized data of coarser spatial resolution such as National Land Cover Data (NLCD).
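The level-2 classifier named above, Spectral Angle Mapper (SAM), assigns each pixel to the reference class whose spectrum makes the smallest angle with the pixel spectrum; a minimal sketch with illustrative spectra:

```python
import math

def spectral_angle(pixel, ref):
    # Angle between two spectra, treating each as a vector of band values.
    dot = sum(p * r for p, r in zip(pixel, ref))
    npix = math.sqrt(sum(p * p for p in pixel))
    nref = math.sqrt(sum(r * r for r in ref))
    # Clamp for floating-point safety before acos.
    return math.acos(max(-1.0, min(1.0, dot / (npix * nref))))

def classify(pixel, references):
    # Assign the pixel to the class with the smallest spectral angle.
    return min(references, key=lambda name: spectral_angle(pixel, references[name]))
```

Because the angle ignores vector magnitude, SAM is insensitive to overall brightness differences, which is one reason it tolerates illumination effects such as the cross-track distortions mentioned above.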


Building a Korean Text Summarization Dataset Using News Articles of Social Media (신문기사와 소셜 미디어를 활용한 한국어 문서요약 데이터 구축)

  • Lee, Gyoung Ho;Park, Yo-Han;Lee, Kong Joo
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.9 no.8
    • /
    • pp.251-258
    • /
    • 2020
  • A training dataset for text summarization consists of pairs of a document and its summary. As conventional approaches to building text summarization datasets are labor-intensive, it is not easy to construct large datasets for text summarization. A collection of news articles is one of the most popular resources for text summarization because it is easily accessible, large-scale, high-quality text. From social media news services, we can collect not only headlines and subheads of news articles but also summary descriptions that human editors write about the news articles. Approximately 425,000 pairs of news articles and their summaries were collected from social media. We implemented an automatic extractive summarizer and trained it on the dataset. The performance of the summarizer was compared with unsupervised models. The summarizer achieved better results than the unsupervised models in terms of ROUGE score.
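The ROUGE score used for evaluation above measures n-gram overlap between a system summary and a reference summary; a simplified ROUGE-1 F-score (unigram overlap with clipped counts, omitting the stemming and stopword handling of full implementations) looks like:

```python
from collections import Counter

def rouge1_f(candidate, reference):
    # Clipped unigram overlap between candidate and reference summaries.
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())   # per-word min of the two counts
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

ROUGE-2 and ROUGE-L, also commonly reported, replace unigrams with bigrams and longest common subsequences respectively.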

Aerial Dataset Integration For Vehicle Detection Based on YOLOv4

  • Omar, Wael;Oh, Youngon;Chung, Jinwoo;Lee, Impyeong
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.4
    • /
    • pp.747-761
    • /
    • 2021
  • With the increasing application of UAVs in intelligent transportation systems, vehicle detection in aerial images has become an essential engineering technology with academic research significance. In this paper, a vehicle detection method for aerial images based on the YOLOv4 deep learning algorithm is presented. At present, the best-known datasets are VOC (The PASCAL Visual Object Classes Challenge), ImageNet, and COCO (Microsoft Common Objects in Context), which are applicable to vehicle detection from UAVs. The value of an integrated dataset lies not only in its size and photo quality but also in its diversity, which affects detection accuracy. The method integrates three public aerial image datasets, VAID, UAVD, and DOTA, into a form suitable for YOLOv4. The trained model shows good test results, especially for small objects, rotating objects, and compact and dense objects, and meets real-time detection requirements. For future work, we will integrate one more aerial image dataset acquired by our lab to increase the number and diversity of training samples while continuing to meet the real-time requirements.
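One concrete step in integrating datasets for YOLO, implied above, is converting each dataset's annotation format to YOLO's: one box per line as class, center x, center y, width, height, all normalized to [0, 1] by the image size. A sketch of the corner-coordinate case (the datasets' actual field layouts differ and are not specified here):

```python
def to_yolo(box, img_w, img_h):
    # Convert pixel corner coordinates (x_min, y_min, x_max, y_max) to
    # YOLO's normalized (center_x, center_y, width, height) convention.
    x_min, y_min, x_max, y_max = box
    cx = (x_min + x_max) / 2 / img_w
    cy = (y_min + y_max) / 2 / img_h
    w = (x_max - x_min) / img_w
    h = (y_max - y_min) / img_h
    return cx, cy, w, h
```

Normalizing by image size lets boxes from datasets with different image resolutions coexist in one training set, which is what makes this kind of integration straightforward.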

Human Detection using Real-virtual Augmented Dataset

  • Jongmin, Lee;Yongwan, Kim;Jinsung, Choi;Ki-Hong, Kim;Daehwan, Kim
    • Journal of information and communication convergence engineering
    • /
    • v.21 no.1
    • /
    • pp.98-102
    • /
    • 2023
  • This paper presents a study on how augmenting semi-synthetic image data improves the performance of human detection algorithms. In the field of object detection, securing a high-quality dataset plays the most important role in training deep learning algorithms. Recently, the acquisition of real image data has become time-consuming and expensive; therefore, research using synthesized data has been conducted. Synthetic data have the advantage that a vast amount of data can be generated and accurately labeled. However, the utility of synthetic data in human detection has not yet been demonstrated. Therefore, we use You Only Look Once (YOLO), the most commonly used object detection algorithm, to experimentally analyze the effect of synthetic data augmentation on human detection performance. As a result of training YOLO on the Penn-Fudan dataset, the YOLO network model trained on a dataset augmented with synthetic data showed high performance in terms of the Precision-Recall curve and F1-Confidence curve.
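The F1-Confidence curve used for comparison above is built by sweeping a confidence threshold over the model's detections and computing F1 at each point; a minimal sketch, with detections represented as illustrative (confidence, is-true-positive) pairs:

```python
def f1_at_threshold(detections, n_ground_truth, threshold):
    # Keep only detections at or above the confidence threshold, then
    # compute F1 against the number of ground-truth objects.
    kept = [is_tp for conf, is_tp in detections if conf >= threshold]
    tp = sum(kept)
    fp = len(kept) - tp
    fn = n_ground_truth - tp
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

Evaluating this at many thresholds traces the curve; the Precision-Recall curve is built the same way from the (precision, recall) pairs.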

Deep Learning Models for Fabric Image Defect Detection: Experiments with Transformer-based Image Segmentation Models (직물 이미지 결함 탐지를 위한 딥러닝 기술 연구: 트랜스포머 기반 이미지 세그멘테이션 모델 실험)

  • Lee, Hyun Sang;Ha, Sung Ho;Oh, Se Hwan
    • The Journal of Information Systems
    • /
    • v.32 no.4
    • /
    • pp.149-162
    • /
    • 2023
  • Purpose In the textile industry, fabric defects significantly impact product quality and consumer satisfaction. This research seeks to enhance defect detection by developing a transformer-based deep learning image segmentation model for learning high-dimensional image features, overcoming the limitations of traditional image classification methods. Design/methodology/approach This study utilizes the ZJU-Leaper dataset to develop a model for detecting defects in fabrics. The ZJU-Leaper dataset includes defects such as presses, stains, warps, and scratches across various fabric patterns. The dataset was built using the defect labeling and image files from ZJU-Leaper, and experiments were conducted with deep learning image segmentation models including Deeplabv3, SegformerB0, SegformerB1, and Dinov2. Findings The experimental results of this study indicate that the SegformerB1 model achieved the highest performance, with an mIOU of 83.61% and a Pixel F1 Score of 81.84%. The SegformerB1 model excelled in sensitivity for detecting fabric defect areas compared to other models. Detailed analysis of its inferences showed accurate predictions of diverse defects, such as stains and fine scratches, within intricate fabric designs.
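The two metrics reported above, mIoU and Pixel F1, can be computed from binary segmentation masks (1 = defect pixel); a minimal two-class sketch with illustrative masks:

```python
def pixel_metrics(pred, truth):
    # Count pixel-level true/false positives/negatives over 0/1 masks.
    tp = fp = fn = tn = 0
    for prow, trow in zip(pred, truth):
        for p, t in zip(prow, trow):
            if p and t:
                tp += 1
            elif p and not t:
                fp += 1
            elif not p and t:
                fn += 1
            else:
                tn += 1
    iou_defect = tp / (tp + fp + fn)          # IoU for the defect class
    iou_background = tn / (tn + fp + fn)      # IoU for the background class
    miou = (iou_defect + iou_background) / 2  # mean over the two classes
    f1 = 2 * tp / (2 * tp + fp + fn)          # pixel F1 on the defect class
    return miou, f1
```

With more classes, mIoU averages the per-class IoU over all classes rather than just two.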

A Study on the Dataset Construction and Model Application for Detecting Surgical Gauze in C-Arm Imaging Using Artificial Intelligence (인공지능을 활용한 C-Arm에서 수술용 거즈 검출을 위한 데이터셋 구축 및 검출모델 적용에 관한 연구)

  • Kim, Jin Yeop;Hwang, Ho Seong;Lee, Joo Byung;Choi, Yong Jin;Lee, Kang Seok;Kim, Ho Chul
    • Journal of Biomedical Engineering Research
    • /
    • v.43 no.4
    • /
    • pp.290-297
    • /
    • 2022
  • During surgery, surgical instruments are sometimes accidentally left behind in the body. Most of these are surgical gauze, so radiopaque gauze (X-ray detectable gauze) is used to prevent accidents in which gauze is left in the body. This gauze comes in wire and pad types. If gauze is suspected to remain in the body, it must be detected from an image taken with a mobile X-ray device and read by a radiologist. However, most operating rooms are not equipped with a mobile X-ray device but with C-Arm equipment, which produces poorer image quality than mobile X-ray equipment, and reading the images takes time. In this study, C-Arm equipment was used to acquire gauze images, a dataset was built, and a detection model was selected using artificial intelligence to compensate for the relatively low image quality and assist the reading of radiology specialists. mAP@50 and detection time are used as performance evaluation indicators. The result is that the two-class gauze detection dataset is more accurate, and the YOLOv5 model achieves an mAP@50 of 93.4% with a detection time of 11.7 ms.
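The mAP@50 metric above counts a detection as correct when its intersection-over-union (IoU) with a ground-truth box is at least 0.5; a minimal IoU check for axis-aligned boxes given as (x_min, y_min, x_max, y_max):

```python
def iou(a, b):
    # Intersection-over-union of two axis-aligned boxes.
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))  # overlap width
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))  # overlap height
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def is_match_at_50(pred_box, gt_box):
    # The matching rule underlying mAP@50.
    return iou(pred_box, gt_box) >= 0.5
```

Full mAP@50 then sorts detections by confidence, matches each at this threshold, and averages precision over the resulting precision-recall curve per class.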

A streamlined pipeline based on HmmUFOtu for microbial community profiling using 16S rRNA amplicon sequencing

  • Hyeonwoo Kim;Jiwon Kim;Ji Won Cho;Kwang-Sung Ahn;Dong-Il Park;Sangsoo Kim
    • Genomics & Informatics
    • /
    • v.21 no.3
    • /
    • pp.40.1-40.11
    • /
    • 2023
  • Microbial community profiling using 16S rRNA amplicon sequencing allows for taxonomic characterization of diverse microorganisms. While amplicon sequence variant (ASV) methods are increasingly favored for their fine-grained resolution of sequence variants, they often discard substantial portions of sequencing reads during quality control, particularly in datasets with a large number of samples. We present a streamlined pipeline that integrates FastP for read trimming, HmmUFOtu for operational taxonomic unit (OTU) clustering, Vsearch for chimera checking, and Kraken2 for taxonomic assignment. To assess the pipeline's performance, we reprocessed two published stool datasets of normal Korean populations: one with 890 and the other with 1,462 independent samples. In the first dataset, HmmUFOtu retained 93.2% of over 104 million read pairs after quality trimming and discarding chimeric or unclassifiable reads, while DADA2, a commonly used ASV method, retained only 44.6% of the reads. Nonetheless, both methods yielded qualitatively similar β-diversity plots. For the second dataset, HmmUFOtu retained 89.2% of read pairs, while DADA2 retained a mere 18.4% of the reads. HmmUFOtu, being a closed-reference clustering method, facilitates merging separately processed datasets, with shared OTUs between the two datasets exhibiting a correlation coefficient of 0.92 in total abundance (log scale). While the first two dimensions of the β-diversity plot exhibited a cohesive mixture of the two datasets, the third dimension revealed the presence of a batch effect. Our comparative evaluation of ASV and OTU methods within this streamlined pipeline provides valuable insights into their performance when processing large-scale microbial 16S rRNA amplicon sequencing data. The strengths of HmmUFOtu and its potential for dataset merging are highlighted.
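The cross-dataset check above correlates total abundances of shared OTUs on a log scale; a minimal Pearson correlation on log-transformed counts (abundance values illustrative, and assuming all counts are positive so the log is defined):

```python
import math

def log_pearson(xs, ys):
    # Pearson correlation of log-transformed abundance vectors, as used
    # to compare shared OTUs between two separately processed datasets.
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    sx = math.sqrt(sum((a - mx) ** 2 for a in lx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ly))
    return cov / (sx * sy)
```

A coefficient near 1, like the 0.92 reported above, indicates that shared OTU abundances scale consistently across the two merged datasets.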