• Title/Summary/Keyword: Deep Learning Dataset

Dual-scale BERT using multi-trait representations for holistic and trait-specific essay grading

  • Minsoo Cho;Jin-Xia Huang;Oh-Woog Kwon
    • ETRI Journal
    • /
    • v.46 no.1
    • /
    • pp.82-95
    • /
    • 2024
  • As automated essay scoring (AES) has progressed from handcrafted techniques to deep learning, holistic scoring capabilities have emerged. However, specific trait assessment remains a challenge because of the limited depth of earlier methods in modeling dual assessments for holistic and multi-trait tasks. To overcome this challenge, we explore providing comprehensive feedback while modeling the interconnections between holistic and trait representations. We introduce the DualBERT-Trans-CNN model, which combines transformer-based representations with a novel dual-scale bidirectional encoder representations from transformers (BERT) encoding approach at the document level. By explicitly leveraging multi-trait representations in a multi-task learning (MTL) framework, our DualBERT-Trans-CNN emphasizes the interrelation between holistic and trait-based score predictions, aiming for improved accuracy. For validation, we conducted extensive tests on the ASAP++ and TOEFL11 datasets. Against models in the same MTL setting, ours showed a 2.0% increase in holistic score. Additionally, compared with single-task learning (STL) models, ours demonstrated a 3.6% enhancement in average multi-trait performance on the ASAP++ dataset.
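
Below is a minimal PyTorch sketch of the multi-task setup the abstract describes: one shared BERT encoder with a holistic-score head and per-trait heads trained jointly. The trait count, head shapes, and loss weighting are illustrative assumptions, not the paper's actual DualBERT-Trans-CNN configuration.

```python
import torch
import torch.nn as nn
from transformers import BertModel

NUM_TRAITS = 4  # assumed number of trait scores

class MultiTraitScorer(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = BertModel.from_pretrained("bert-base-uncased")
        hidden = self.encoder.config.hidden_size
        self.holistic_head = nn.Linear(hidden, 1)
        self.trait_heads = nn.ModuleList(
            [nn.Linear(hidden, 1) for _ in range(NUM_TRAITS)]
        )

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # [CLS] token as essay representation
        holistic = self.holistic_head(cls).squeeze(-1)
        traits = torch.cat([head(cls) for head in self.trait_heads], dim=-1)
        return holistic, traits

def mtl_loss(holistic_pred, holistic_true, trait_pred, trait_true, alpha=0.5):
    # Joint objective: holistic MSE plus trait MSE (the weighting is assumed).
    mse = nn.functional.mse_loss
    return alpha * mse(holistic_pred, holistic_true) + (1 - alpha) * mse(trait_pred, trait_true)
```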

Automatic Dataset Generation of Object Detection and Instance Segmentation using Mask R-CNN (Mask R-CNN을 이용한 물체인식 및 개체분할의 학습 데이터셋 자동 생성)

  • Jo, HyunJun;Kim, Dawit;Song, Jae-Bok
    • The Journal of Korea Robotics Society
    • /
    • v.14 no.1
    • /
    • pp.31-39
    • /
    • 2019
  • A robot usually adopts ANN (artificial neural network)-based object detection and instance segmentation algorithms to recognize objects, but creating datasets for these algorithms incurs high labeling costs because the datasets must be labeled manually. To lower this cost, a new scheme is proposed that can automatically generate training images for specific objects and label them. The scheme uses an instance segmentation algorithm trained to produce the masks of unknown objects, so that these masks can be obtained in a simple environment. The RGB images of the objects can then be extracted using these masks, and the object classes are labeled once through human supervision. After the object images are obtained, they are synthesized with various background images to create new images. Labeling of the synthesized images is performed automatically using the masks and the previously entered object classes. In addition, human intervention is further reduced by using a robot arm to collect object images. The experiments show that the performance of instance segmentation trained through the proposed method is equivalent to that obtained with a real dataset, and that the time required to generate the dataset can be reduced significantly.
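
A minimal NumPy sketch of the cut-and-paste synthesis step described above: given an object crop and its binary mask, composite the object onto a background at a random position and derive the instance mask and bounding box labels automatically. Array shapes and the helper name are assumptions for illustration.

```python
import numpy as np

def synthesize(background, obj_rgb, obj_mask, rng=np.random.default_rng()):
    """background: (H, W, 3) uint8; obj_rgb: (h, w, 3) uint8; obj_mask: (h, w) bool.
    Assumes the background is strictly larger than the object crop."""
    H, W = background.shape[:2]
    h, w = obj_mask.shape
    y = int(rng.integers(0, H - h))                # random top-left corner
    x = int(rng.integers(0, W - w))
    out = background.copy()
    region = out[y:y + h, x:x + w]
    region[obj_mask] = obj_rgb[obj_mask]           # paste object pixels only
    full_mask = np.zeros((H, W), dtype=bool)       # instance mask label, for free
    full_mask[y:y + h, x:x + w] = obj_mask
    bbox = (x, y, x + w, y + h)                    # box label, (x1, y1, x2, y2)
    return out, full_mask, bbox
```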

Real-world multimodal lifelog dataset for human behavior study

  • Chung, Seungeun;Jeong, Chi Yoon;Lim, Jeong Mook;Lim, Jiyoun;Noh, Kyoung Ju;Kim, Gague;Jeong, Hyuntae
    • ETRI Journal
    • /
    • v.44 no.3
    • /
    • pp.426-437
    • /
    • 2022
  • To understand the multilateral characteristics of human behavior and physiological markers related to physical, emotional, and environmental states, extensive lifelog data collection in a real-world environment is essential. Here, we propose a data collection method using multimodal mobile sensing and present a long-term dataset from 22 subjects and 616 days of experimental sessions. The dataset contains over 10,000 hours of data, including physiological data such as photoplethysmography, electrodermal activity, and skin temperature, in addition to the multivariate behavioral data. Furthermore, it contains 10,372 user labels with emotional states and 590 days of sleep quality data. To demonstrate feasibility, human activity recognition was applied to the sensor data using a convolutional neural network-based deep learning model, achieving 92.78% recognition accuracy. From the activity recognition results, we extracted daily behavior patterns and discovered five representative models by applying spectral clustering. This demonstrates that the dataset contributes toward understanding human behavior using multimodal data accumulated throughout daily life under natural conditions.
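
A short scikit-learn sketch of the pattern-discovery step: cluster per-day activity summaries into five representative behavior models with spectral clustering. The daily feature construction here (hourly activity histograms from random data) is an assumption; the paper derives patterns from its CNN activity-recognition output.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
# Assumed input: one row per day, each row summarizing recognized
# activity over 24 hourly slots (stand-in data for illustration).
daily_patterns = rng.random((590, 24))

labels = SpectralClustering(n_clusters=5, affinity="nearest_neighbors",
                            random_state=0).fit_predict(daily_patterns)
print(np.bincount(labels))  # size of each representative behavior cluster
```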

A Study on Model for Drivable Area Segmentation based on Deep Learning (딥러닝 기반의 주행가능 영역 추출 모델에 관한 연구)

  • Jeon, Hyo-jin;Cho, Soo-sun
    • Journal of Internet Computing and Services
    • /
    • v.20 no.5
    • /
    • pp.105-111
    • /
    • 2019
  • Core technologies leading the Fourth Industrial Revolution era, such as artificial intelligence, big data, and autonomous driving, are implemented and serviced through the rapid development of computing power and hyper-connected networks based on the Internet of Things. In this paper, we implement two different models for drivable area segmentation in various environments and propose the better model by comparing their results. The models use DeepLab V3+ and Mask R-CNN, which perform strongly in image segmentation and are used in many studies on autonomous driving technology. For driving information in various environments, we use the BDD dataset, which provides driving videos and images under various weather conditions and at day and night. The results show that Mask R-CNN, with 68.33% IoU, outperforms DeepLab V3+, with 48.97% IoU. In addition, in a visual inspection of drivable area segmentation on driving images, the accuracy of Mask R-CNN is 83% and that of DeepLab V3+ is 69%, indicating that Mask R-CNN is more effective than DeepLab V3+ for drivable area segmentation.
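
A minimal sketch of the evaluation metric behind the comparison: pixel-wise intersection-over-union between a predicted drivable-area mask and the ground truth. Both inputs are boolean arrays of the same shape; the example thresholds are assumptions.

```python
import numpy as np

def iou(pred_mask, gt_mask):
    # IoU = |prediction AND ground truth| / |prediction OR ground truth|
    intersection = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return intersection / union if union else 0.0

# e.g. iou(mask_rcnn_scores > 0.5, gt) vs. iou(deeplab_out == drivable_id, gt)
```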

Detection of Plastic Greenhouses by Using Deep Learning Model for Aerial Orthoimages (딥러닝 모델을 이용한 항공정사영상의 비닐하우스 탐지)

  • Byunghyun Yoon;Seonkyeong Seong;Jaewan Choi
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.2
    • /
    • pp.183-192
    • /
    • 2023
  • Remotely sensed data, such as satellite imagery and aerial photos, can be used to extract and detect objects in an image through image interpretation and processing techniques. In particular, the potential for digital map updating and land monitoring through automatic object detection has increased as the spatial resolution of remotely sensed data has improved and deep learning technologies have developed. In this paper, we extract plastic greenhouses from aerial orthophotos using the fully convolutional densely connected convolutional network (FC-DenseNet), one of the representative deep learning models for semantic segmentation, and then perform a quantitative analysis of the extraction results. Using the farm map of the Ministry of Agriculture, Food and Rural Affairs in Korea, training data was generated by labeling plastic greenhouses in the Damyang and Miryang areas, and FC-DenseNet was trained on this dataset. To apply the deep learning model to remotely sensed imagery, instance normalization, which can maintain the spectral characteristics of the bands, was used. In addition, optimal weights for each band were determined by adding attention modules to the deep learning model. The experiments showed that a deep learning model can extract plastic greenhouses, and these results can be applied to digital map updating of the Farm-map and land-cover maps.
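
A PyTorch sketch of the two adaptations mentioned above: instance normalization in place of batch normalization to preserve per-image spectral statistics, and a channel-attention module that learns per-band weights. This squeeze-and-excitation-style block and the four-band stem are assumed stand-ins; the paper's attention module is not specified in the abstract.

```python
import torch
import torch.nn as nn

class BandAttention(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(x).unsqueeze(-1).unsqueeze(-1)  # learned per-band weight in [0, 1]
        return x * w

stem = nn.Sequential(
    BandAttention(channels=4),          # weight the input bands (e.g. R, G, B, NIR)
    nn.Conv2d(4, 32, kernel_size=3, padding=1),
    nn.InstanceNorm2d(32),              # instance norm instead of batch norm
    nn.ReLU(),
)
out = stem(torch.randn(1, 4, 256, 256))  # one 4-band orthophoto patch
```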

Cell Images Classification using Deep Convolutional Autoencoder of Unsupervised Learning (비지도학습의 딥 컨벌루셔널 자동 인코더를 이용한 셀 이미지 분류)

  • Vununu, Caleb;Park, Jin-Hyeok;Kwon, Oh-Jun;Lee, Suk-Hwan;Kwon, Ki-Ryong
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2021.11a
    • /
    • pp.942-943
    • /
    • 2021
  • The present work proposes a classification system for HEp-2 cell images using an unsupervised deep feature learning method. Unlike most of the state-of-the-art methods in the literature, which utilize deep learning in a strictly supervised way, we propose the use of a deep convolutional autoencoder (DCAE) as the principal feature extractor for classifying the different types of HEp-2 cell images. The network takes the original cell images as inputs and learns to reconstruct them in order to capture features related to the global shape of the cells. A final feature vector is constructed from the latent representations extracted by the DCAE, giving a highly discriminative feature representation. The created features are fed to a nonlinear classifier whose output represents the final type of the cell image. We tested the discriminability of the proposed features on one of the most popular HEp-2 cell classification datasets, the SNPHEp-2 dataset, and the results show that the proposed features capture the distinctive characteristics of the different cell types while performing at least as well as the deep-learning-based state-of-the-art methods.
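
A minimal PyTorch sketch of the DCAE pipeline: train a convolutional autoencoder to reconstruct cell images, then reuse the encoder's latent code as the feature vector for a separate nonlinear classifier. Layer sizes and the input resolution are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DCAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = DCAE()
x = torch.rand(8, 1, 64, 64)                 # batch of grayscale cell images
recon, latent = model(x)
loss = nn.functional.mse_loss(recon, x)      # unsupervised reconstruction objective
features = latent.flatten(1)                 # feature vectors for a nonlinear classifier
```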

Convolutional Neural Networks for Character-level Classification

  • Ko, Dae-Gun;Song, Su-Han;Kang, Ki-Min;Han, Seong-Wook
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.6 no.1
    • /
    • pp.53-59
    • /
    • 2017
  • Optical character recognition (OCR) automatically recognizes text in an image. OCR is still a challenging problem in computer vision, and a successful solution has important device applications, such as text-to-speech conversion and automatic document classification. In this work, we analyze character recognition performance using three current state-of-the-art deep learning structures: AlexNet, LeNet, and SPNet. For this, we built our own dataset that contains digits and upper- and lower-case characters. We experiment in the presence of salt-and-pepper noise or Gaussian noise and report the performance comparison in terms of recognition error. Experimental results with five-fold cross-validation indicate that the SPNet structure (our approach) outperforms AlexNet and LeNet in recognition error.
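
A NumPy sketch of the two noise corruptions used in the experiments, applied to a grayscale character image with values in [0, 1]. The noise levels are assumptions.

```python
import numpy as np

def salt_and_pepper(img, amount=0.05, rng=np.random.default_rng()):
    # Flip a fraction of pixels to pure black (pepper) or white (salt).
    out = img.copy()
    flips = rng.random(img.shape) < amount
    out[flips] = rng.integers(0, 2, img.shape)[flips].astype(img.dtype)
    return out

def gaussian_noise(img, sigma=0.1, rng=np.random.default_rng()):
    # Additive zero-mean Gaussian noise, clipped back to the valid range.
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)
```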

Semiconductor Process Inspection Using Mask R-CNN (Mask R-CNN을 활용한 반도체 공정 검사)

  • Han, Jung Hee;Hong, Sung Soo
    • Journal of the Semiconductor & Display Technology
    • /
    • v.19 no.3
    • /
    • pp.12-18
    • /
    • 2020
  • In semiconductor manufacturing, defect detection is critical to maintaining high yield. Currently, computer vision systems used in semiconductor photolithography still rely on digital image processing algorithms, which often produce inspection faults because of their sensitivity to the external environment. Thus, we intend to handle this problem by using Mask R-CNN instead of a digital image processing algorithm. Additionally, Mask R-CNN can be trained with an image dataset pre-processed by a specifically designed digital image filter to enhance the feature maps of the convolutional neural network (CNN). Our approach, which combines the advantages of digital image processing and deep-learning-based instance segmentation, yields a more efficient semiconductor photolithography inspection system than the conventional one.
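
A sketch of the combined pipeline using torchvision's off-the-shelf Mask R-CNN: filter the input image first, then run instance segmentation on the result. The Gaussian-blur pre-filter and score threshold stand in for the paper's purpose-designed filter and post-processing, which the abstract does not specify.

```python
import torch
import torchvision
from torchvision.transforms import functional as F

# Pretrained COCO weights (downloaded on first use); a production system
# would fine-tune on labeled wafer images instead.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torch.rand(3, 512, 512)                   # stand-in wafer image
filtered = F.gaussian_blur(image, kernel_size=5)  # assumed pre-processing filter
with torch.no_grad():
    pred = model([filtered])[0]                   # dict of boxes, labels, scores, masks
defect_masks = pred["masks"][pred["scores"] > 0.5]
```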

GAN based Data Augmentation of Channel Data for the Application of RF Finger-printing in NFC (NFC에서 무선 핑거프린팅 기술 적용을 위한 GAN 기반 채널데이터 증강방안)

  • Lee, Woongsup
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.25 no.9
    • /
    • pp.1271-1274
    • /
    • 2021
  • RF fingerprinting based on deep learning (DL) has gained interest as a means to improve the security of near field communication (NFC) by allowing identification of NFC tags based on their unique physical characteristics. To achieve high accuracy in the identification of NFC tags, it is crucial to utilize a large amount of training data; however, such a dataset is hard to collect in practice. In this study, we provide a new methodology to generate the RF waveforms of NFC tags, that is, data augmentation, based on a conditional generative adversarial network (CGAN). Using RF waveforms of NFC tags collected from a testbed with software-defined radio (SDR), we confirmed that realistic RF waveforms can be generated through our proposed scheme.
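
A minimal PyTorch sketch of a conditional GAN for 1-D waveform augmentation, where both networks are conditioned on the tag identity through an embedding. The waveform length, latent size, and layer widths are all assumptions; the abstract does not describe the paper's CGAN architecture.

```python
import torch
import torch.nn as nn

NUM_TAGS, LATENT, WAVE_LEN = 10, 64, 256  # assumed sizes

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_TAGS, LATENT)       # tag-identity condition
        self.net = nn.Sequential(
            nn.Linear(2 * LATENT, 256), nn.ReLU(),
            nn.Linear(256, WAVE_LEN), nn.Tanh(),          # synthetic waveform in [-1, 1]
        )

    def forward(self, z, tag_id):
        return self.net(torch.cat([z, self.embed(tag_id)], dim=-1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_TAGS, WAVE_LEN)
        self.net = nn.Sequential(
            nn.Linear(2 * WAVE_LEN, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),                            # real/fake logit
        )

    def forward(self, wave, tag_id):
        return self.net(torch.cat([wave, self.embed(tag_id)], dim=-1))

g = Generator()
fake = g(torch.randn(4, LATENT), torch.tensor([0, 1, 2, 3]))  # 4 augmented waveforms
```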

A Study on Security Event Detection in ESM Using Big Data and Deep Learning

  • Lee, Hye-Min;Lee, Sang-Joon
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.13 no.3
    • /
    • pp.42-49
    • /
    • 2021
  • As cyber attacks become more intelligent, detecting advanced attacks is difficult in various fields such as industry, defense, and medical care. Individual security systems such as an IPS (intrusion prevention system) are in place, but the need for centralized, integrated management of these security systems is increasing. In this paper, we collect big data for intrusion detection and build an intrusion detection platform based on a CNN (convolutional neural network) deep learning model. We design an intelligent big data platform that collects data by observing and analyzing user visit logs and linking them with big data. In this study, we evaluated the performance of the intrusion detection system (IDS) using the KDD99 dataset, which was derived from DARPA's 1998 evaluation data, and tested it on the four attack categories of KDD99: DoS, U2R, R2L, and Probing.
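
A PyTorch sketch of a small 1-D CNN classifier over KDD99 feature vectors, mapping each 41-feature connection record to one of five classes (normal plus the four attack categories). The layer sizes are illustrative assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

NUM_FEATURES, NUM_CLASSES = 41, 5  # KDD99 features; normal + DoS/U2R/R2L/Probing

model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool1d(2),                                  # 41 -> 20
    nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * (NUM_FEATURES // 2), NUM_CLASSES),
)

x = torch.randn(8, 1, NUM_FEATURES)   # batch of normalized connection records
logits = model(x)                     # (8, 5) class scores for cross-entropy training
```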