• Title/Abstract/Keyword: Benchmark dataset


Generation of Super-Resolution Benchmark Dataset for Compact Advanced Satellite 500 Imagery and Proof of Concept Results

  • Yonghyun Kim; Jisang Park; Daesub Yoon
    • Korean Journal of Remote Sensing, Vol. 39, No. 4, pp. 459-466, 2023
  • In the last decade, the dramatic advancement of artificial intelligence through the development of various deep learning techniques has significantly contributed to remote sensing and satellite image applications. Among many prominent areas, super-resolution research has seen substantial growth with the release of several benchmark datasets and the rise of generative adversarial network-based studies. However, most previously published remote sensing benchmark datasets represent spatial resolutions of approximately 10 meters, which imposes limitations when directly applying them to super-resolution of small objects at centimeter-level spatial resolution. Furthermore, if a dataset lacks a global spatial distribution and is specialized to particular land covers, the consequent lack of feature diversity can directly impact quantitative performance and prevent the formation of robust foundation models. To overcome these issues, this paper proposes a method to generate benchmark datasets by simulating the modulation transfer functions of the sensor. The proposed approach leverages a simulation method with a solid theoretical foundation that is notably recognized in image fusion. Additionally, the generated benchmark dataset is applied to state-of-the-art super-resolution base models for quantitative and visual analysis, and the shortcomings of the existing datasets are discussed. Through these efforts, we anticipate that the proposed benchmark dataset will facilitate various super-resolution research in Korea in the near future.
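
As an illustration of the MTF-simulation idea above, one common approximation is to model the sensor MTF as a Gaussian, low-pass filter the high-resolution image accordingly, and decimate to produce the paired low-resolution input. The sketch below assumes a Gaussian MTF model, a single-band image, a scale factor of 4, and an MTF value of 0.3 at the low-resolution Nyquist frequency; it is a minimal sketch under those assumptions, not the authors' exact simulation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_lr(hr_image: np.ndarray, scale: int = 4, mtf_at_nyquist: float = 0.3) -> np.ndarray:
    """Degrade a high-resolution band to a low-resolution counterpart.

    The blur width is derived from an assumed Gaussian MTF value at the
    Nyquist frequency of the low-resolution grid.
    """
    # For a Gaussian PSF, MTF(f) = exp(-2 * (pi * sigma * f)^2).
    # Solve for sigma at the LR Nyquist frequency f = 1 / (2 * scale).
    nyquist = 1.0 / (2.0 * scale)
    sigma = np.sqrt(-np.log(mtf_at_nyquist) / 2.0) / (np.pi * nyquist)

    blurred = gaussian_filter(hr_image.astype(np.float64), sigma=sigma)
    return blurred[::scale, ::scale]  # decimate to the LR grid
```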

Benchmark for Deep Learning based Visual Odometry and Monocular Depth Estimation

  • 최혁두
    • The Journal of Korea Robotics Society, Vol. 14, No. 2, pp. 114-121, 2019
  • This paper presents a new benchmark system for visual odometry (VO) and monocular depth estimation (MDE). As deep learning has become a key technology in computer vision, many researchers are trying to apply it to VO and MDE. Just a couple of years ago, the two problems were studied independently in a supervised way, but they are now coupled and trained together in an unsupervised way. However, before designing fancy models and losses, researchers have to customize datasets for training and testing. After training, the model also has to be compared with existing models, which is a huge burden. The benchmark provides an input dataset ready to use for VO and MDE research in 'tfrecords' format, and an output dataset that includes model checkpoints and inference results of the existing models. It also provides various tools for data formatting, training, and evaluation. In the experiments, the existing models were evaluated to verify the performances reported in the corresponding papers, and we found that the evaluation results are inferior to the reported performances.
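
For context on the 'tfrecords' input format mentioned above, such files are typically consumed with TensorFlow's tf.data pipeline. The sketch below assumes hypothetical feature keys ("image", "pose") and a hypothetical file name; the benchmark's actual schema may differ.

```python
import tensorflow as tf

FEATURES = {
    "image": tf.io.FixedLenFeature([], tf.string),   # JPEG-encoded frame (assumed key)
    "pose": tf.io.FixedLenFeature([6], tf.float32),  # 6-DoF pose vector (assumed key)
}

def parse_example(serialized):
    # Decode one serialized tf.train.Example into a float image and its pose.
    example = tf.io.parse_single_example(serialized, FEATURES)
    image = tf.io.decode_jpeg(example["image"], channels=3)
    return tf.image.convert_image_dtype(image, tf.float32), example["pose"]

dataset = (
    tf.data.TFRecordDataset("train.tfrecords")  # hypothetical file name
    .map(parse_example, num_parallel_calls=tf.data.AUTOTUNE)
    .batch(8)
    .prefetch(tf.data.AUTOTUNE)
)
```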

BEGINNER'S GUIDE TO NEURAL NETWORKS FOR THE MNIST DATASET USING MATLAB

  • Kim, Bitna; Park, Young Ho
    • Korean Journal of Mathematics, Vol. 26, No. 2, pp. 337-348, 2018
  • The MNIST dataset is a database of handwritten digit images, each labeled with an integer from 0 to 9. It is used to benchmark the performance of machine learning algorithms. Neural networks for MNIST are regarded as the starting point for studying machine learning algorithms. However, it is not easy to start the actual programming. In this expository article, we give step-by-step instructions for building neural networks for the MNIST dataset using MATLAB.
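
The article itself walks through MATLAB; for readers working in Python, a comparable minimal network for MNIST can be sketched with Keras as follows. The layer sizes and epoch count are illustrative choices, not taken from the article.

```python
import tensorflow as tf

# Load the 70,000 MNIST digits (60k train / 10k test) and scale pixels to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A single hidden layer typically reaches around 97-98% test accuracy on MNIST.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))
```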

WebSHArk 1.0: A Benchmark Collection for Malicious Web Shell Detection

  • Kim, Jinsuk; Yoo, Dong-Hoon; Jang, Heejin; Jeong, Kimoon
    • Journal of Information Processing Systems, Vol. 11, No. 2, pp. 229-238, 2015
  • Web shells are programs written for a specific purpose in web scripting languages such as PHP, ASP, ASP.NET, JSP, and PERL-CGI. They provide a means to communicate with the server's operating system via the interpreter of the web scripting language, so a web shell can execute OS-specific commands over HTTP. Usually, web attacks by malicious users are carried out by uploading one of these web shells to compromise the target web server. Although there have been several approaches to detecting such malicious web shells, no standard dataset has been built for comparing web shell detection techniques. In this paper, we present a collection of web shell files, WebSHArk 1.0, as a standard dataset for current and future studies in malicious web shell detection. To provide baseline results for future studies and for the improvement of current tools, we also present benchmark results obtained by scanning the WebSHArk dataset directory with three web shell scanning tools that are publicly available on the Internet. Due to security and legal issues, the WebSHArk 1.0 dataset is available only upon request via email to one of the authors.
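
To illustrate the kind of signature-based scanning such tools perform over a dataset directory, the toy sketch below walks a folder and flags files matching a few web shell indicators. It is not one of the scanners evaluated in the paper; the pattern list and directory name are hypothetical.

```python
import os
import re

# Hypothetical signatures: constructs commonly abused by PHP/ASP web shells.
SIGNATURES = [re.compile(p, re.IGNORECASE) for p in (
    r"eval\s*\(\s*base64_decode", r"passthru\s*\(", r"shell_exec\s*\(",
    r"WScript\.Shell", r"cmd\.exe\s*/c",
)]

def scan_directory(root: str):
    """Yield (path, pattern) for every file matching a web shell signature."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                text = open(path, "r", errors="ignore").read()
            except OSError:
                continue
            for sig in SIGNATURES:
                if sig.search(text):
                    yield path, sig.pattern
                    break

for hit in scan_directory("webshell_samples"):  # hypothetical dataset directory
    print(*hit)
```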

Comparison of Fine-Tuned Convolutional Neural Networks for Clipart Style Classification

  • Lee, Seungbin; Kim, Hyungon; Seok, Hyekyoung; Nang, Jongho
    • International Journal of Internet, Broadcasting and Communication, Vol. 9, No. 4, pp. 1-7, 2017
  • Clipart is artificial visual content created using tools such as Illustrator to highlight information, and its style plays a critical role in determining how it looks. However, previous studies on clipart have focused only on object recognition [16], segmentation, and retrieval of clipart images using hand-crafted image features. Recently, some clipart classification studies based on style similarity using CNNs have been proposed; however, they used different CNN models and experimented with different benchmark datasets, so it is very hard to compare their performances. This paper presents an experimental analysis of clipart classification based on style similarity with two well-known CNN models (Inception-ResNet-v2 [13] and VGG-16 [14]) and transfer learning on the same benchmark dataset (Microsoft Style Dataset 3.6K). From this experiment, we find that the accuracy of Inception-ResNet-v2 is better than that of VGG for clipart style classification because of its deeper architecture and parallel convolutions with various kernel sizes. We also find that end-to-end training can improve the accuracy by more than 20% for both CNN models.
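
A transfer-learning setup of the kind compared above can be sketched in Keras as follows, using a pretrained Inception-ResNet-v2 backbone with a new classification head and end-to-end fine-tuning. The number of style classes and the hyperparameters are placeholders, not the paper's settings.

```python
import tensorflow as tf

# Pretrained ImageNet backbone without the original classification head.
base = tf.keras.applications.InceptionResNetV2(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = True  # end-to-end fine-tuning rather than a frozen backbone

N_STYLES = 25  # hypothetical number of clipart style classes

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(N_STYLES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```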

KOMUChat: Korean Online Community Dialogue Dataset for AI Learning

  • 유용상; 정민화; 이승민; 송민
    • Journal of Intelligence and Information Systems, Vol. 29, No. 2, pp. 219-240, 2023
  • Efforts continue to develop conversational AI with which users can interact satisfyingly. Building such systems requires training data that reflects how people actually talk, but existing datasets are often not in question-answer form or are written in formal honorific style, making it hard for users to feel a sense of familiarity. This paper therefore builds and proposes KOMUChat, a dialogue dataset of 30,767 question-answer sentence pairs collected from online communities. Each pair consists of the post title and the first comment from relationship-counseling boards used mainly by men and by women, respectively. Hateful and other low-quality data were removed through automatic and manual cleaning to build a high-quality dataset. To validate KOMUChat, we trained a language model on this dataset and on a benchmark dataset and compared the results. KOMUChat outperformed the benchmark dataset in terms of answer appropriateness, user satisfaction, and whether the conversational AI achieved its purpose. This work presents the largest open-source single-turn conversational text dataset to date and is significant in that it builds a more approachable Korean dataset that reflects the textual characteristics of each community.
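
The title-plus-first-comment pairing and automatic filtering described above can be roughly sketched as below. The column names, filter list, and file names are hypothetical and do not reproduce the authors' actual pipeline.

```python
import pandas as pd

# Hypothetical raw dump with one row per post; column names are assumptions.
posts = pd.read_csv("community_posts.csv")  # columns: title, first_comment

BANNED = {"<profanity-1>", "<hate-term-1>"}  # placeholder automatic filter list

def is_clean(text) -> bool:
    # Keep only string rows that contain none of the banned terms.
    return isinstance(text, str) and not any(w in text for w in BANNED)

pairs = posts[posts["title"].map(is_clean) & posts["first_comment"].map(is_clean)]
pairs = pairs.rename(columns={"title": "question", "first_comment": "answer"})
pairs.to_csv("komuchat_style_pairs.csv", index=False)  # hypothetical output file
```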

OryzaGP: rice gene and protein dataset for named-entity recognition

  • Larmande, Pierre; Do, Huy; Wang, Yue
    • Genomics & Informatics, Vol. 17, No. 2, pp. 17.1-17.3, 2019
  • Text mining has become an important research method in biology, its original purpose being to extract biological entities such as genes, proteins, and phenotypic traits in order to extend knowledge from scientific papers. However, few thorough studies on text mining and application development have been performed for plant molecular biology data, especially for rice, resulting in a lack of datasets available for solving named-entity recognition tasks for this species. Since benchmarks for rice are rare, we faced various difficulties in exploiting advanced machine learning methods for accurate analysis of the rice literature. To evaluate several approaches to automatically extracting information on gene/protein entities, we built a new dataset for rice as a benchmark. This dataset is composed of titles and abstracts extracted from scientific papers focusing on the rice species and downloaded from PubMed. During the 5th Biomedical Linked Annotation Hackathon, a portion of the dataset was uploaded to PubAnnotation for sharing. Our ultimate goal is to offer a shared task on rice gene/protein name recognition through the BioNLP Open Shared Tasks framework using this dataset, to facilitate an open comparison and evaluation of different approaches to the task.
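
Since part of the dataset is shared via PubAnnotation, annotations follow that service's JSON layout (document text plus "denotations" carrying character spans and labels). The sketch below reads one such document; the example sentence and entity labels are illustrative, not taken from OryzaGP.

```python
import json

# A minimal PubAnnotation-style document (structure follows the public format:
# "text" plus "denotations" with character spans and an "obj" label).
doc = json.loads("""
{
  "text": "OsMADS1 interacts with OsMADS6 during rice floral development.",
  "denotations": [
    {"span": {"begin": 0, "end": 7},  "obj": "Gene"},
    {"span": {"begin": 23, "end": 30}, "obj": "Gene"}
  ]
}
""")

# Print each annotated entity mention with its label.
for d in doc["denotations"]:
    begin, end = d["span"]["begin"], d["span"]["end"]
    print(doc["text"][begin:end], d["obj"])
```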

Convolutional Neural Network with Particle Filter Approach for Visual Tracking

  • Tyan, Vladimir; Kim, Doohyun
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 12, No. 2, pp. 693-709, 2018
  • In this paper, we propose a compact Convolutional Neural Network (CNN)-based tracker combined with a particle filter architecture, in which the CNN model operates as an accurate candidate estimator while the particle filter predicts the target motion dynamics, lowering the overall number of calculations and refining the resulting target bounding box. Experiments were conducted on the Online Object Tracking Benchmark (OTB) [34] dataset, and a comparison with other state-of-the-art trackers was performed in terms of accuracy and precision, indicating that the proposed algorithm outperforms all state-of-the-art trackers included in the OTB dataset, specifically TLD [16], MIL [1], SCM [36], and ASLA [15]. A comprehensive speed analysis also showed that the average frames per second (FPS) of the proposed tracker places it among the top-10 trackers from the OTB dataset [34].
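
The particle filter side of a tracker like the one described above follows the usual predict-weight-resample cycle. The sketch below shows that cycle with a stand-in score_fn in place of the CNN candidate estimator; the state parameterization and noise values are assumptions, not the paper's settings.

```python
import numpy as np

def particle_filter_step(particles, weights, score_fn, motion_std=(4.0, 4.0, 0.02)):
    """One predict-score-resample cycle for bounding-box tracking.

    particles: (N, 3) array of (cx, cy, scale) states; score_fn stands in for
    the CNN candidate estimator and returns one confidence per particle.
    """
    n = len(particles)
    # Predict: diffuse particles with a simple random-walk motion model.
    particles = particles + np.random.normal(0.0, motion_std, size=particles.shape)
    # Update: reweight each candidate by its (stand-in) CNN confidence.
    weights = weights * score_fn(particles)
    weights = weights / (weights.sum() + 1e-12)
    # Resample: multinomial resampling to avoid weight degeneracy.
    idx = np.random.choice(n, size=n, p=weights)
    return particles[idx], np.full(n, 1.0 / n)

# The weighted (or post-resampling) mean of the particles gives the box estimate.
```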

Comparison and Application of Deep Learning-Based Anomaly Detection Algorithms for Transparent Lens Defects

  • 김한비; 서대호
    • Journal of Society of Korea Industrial and Systems Engineering, Vol. 47, No. 1, pp. 9-19, 2024
  • Deep learning-based computer vision anomaly detection algorithms are widely utilized in various fields. Especially in manufacturing, the difficulty of collecting abnormal data compared to normal data, and the challenge of defining all potential abnormalities in advance, have led to an increasing demand for unsupervised learning methods that rely only on normal data. In this study, we conducted a comparative analysis of deep learning-based unsupervised learning algorithms that define and detect abnormalities that can occur when transparent contact lenses are immersed in a liquid solution. We validated the unsupervised learning algorithms used in this study on the existing anomaly detection benchmark dataset, MVTec AD, and then applied them to our task. The existing benchmark dataset consists primarily of solid objects, whereas our experiments judge the shape and presence of lenses submerged in liquid. Among the algorithms analyzed, EfficientAD achieved an AUROC and F1-score of 0.97 in image-level tests. However, its F1-score dropped to 0.18 in pixel-level tests, making it difficult to localize where abnormalities occurred. Even so, EfficientAD demonstrated excellent image-level performance in classifying normal and abnormal instances, suggesting that with the collection of and training on large-scale data from real industrial settings, even better performance can be expected.
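
The image-level AUROC and F1 figures quoted above are standard metrics computed over per-image anomaly scores; a minimal scikit-learn sketch with made-up scores and labels follows.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, f1_score

# Hypothetical image-level anomaly scores (higher = more anomalous) and labels.
scores = np.array([0.05, 0.12, 0.91, 0.08, 0.77, 0.63])
labels = np.array([0, 0, 1, 0, 1, 1])

auroc = roc_auc_score(labels, scores)     # threshold-free ranking metric
threshold = 0.5                           # F1 needs a hard threshold; tune on validation data
f1 = f1_score(labels, scores >= threshold)
print(f"AUROC={auroc:.2f}  F1={f1:.2f}")
```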

STAR-24K: A Public Dataset for Space Common Target Detection

  • Zhang, Chaoyan; Guo, Baolong; Liao, Nannan; Zhong, Qiuyun; Liu, Hengyan; Li, Cheng; Gong, Jianglei
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 16, No. 2, pp. 365-380, 2022
  • Target detection algorithms based on supervised learning are the current mainstream for target detection. A high-quality dataset is the prerequisite for a target detection algorithm to achieve good detection performance: the larger and higher-quality the dataset, the stronger the generalization ability of the model; in other words, the dataset determines the upper limit of what the model can learn. Convolutional neural networks optimize their parameters under strong supervision, where the error is calculated by comparing the predicted box with the manually labeled ground-truth box and then propagated back into the network for continuous optimization. Strongly supervised learning relies mainly on a large number of images for continuous learning, so the number and quality of images directly affect the results. This paper proposes STAR-24K (a dataset for Space TArget Recognition with more than 24,000 images) for detecting common targets in space. Since there is currently no publicly available dataset for space target detection, we extracted pictures from channels such as images and videos released on the official websites of NASA (the National Aeronautics and Space Administration) and ESA (the European Space Agency) and expanded them to 24,451 pictures. We evaluate popular object detection algorithms on the dataset to build a benchmark. Our STAR-24K dataset is publicly available at https://github.com/Zzz-zcy/STAR-24K.
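
The comparison between a predicted box and a manually labeled ground-truth box mentioned above is conventionally scored with intersection-over-union (IoU); a minimal sketch:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-12)

print(iou((10, 10, 50, 50), (30, 30, 70, 70)))  # partial overlap, approx. 0.14
```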