• Title/Summary/Keyword: Korean human dataset

Search Result 165

A Study of Facial Organs Classification System Based on Fusion of CNN Features and Haar-CNN Features

  • Hao, Biao;Lim, Hye-Youn;Kang, Dae-Seong
    • The Journal of Korean Institute of Information Technology
    • /
    • v.16 no.11
    • /
    • pp.105-113
    • /
    • 2018
  • In this paper, we propose a method for effective classification of the eyes, nose, and mouth of the human face. Most recent image classification uses Convolutional Neural Networks (CNNs). However, the features extracted by a CNN alone are not sufficient, and the classification accuracy is not high. We propose a new algorithm to improve classification performance. The proposed method can be divided into three parts. First, the Haar feature extraction algorithm is used to construct eye, nose, and mouth datasets from face images. Second, the model extracts CNN features of the image using AlexNet. Finally, Haar-CNN features are extracted by performing convolution after Haar feature extraction. The CNN features and Haar-CNN features are then fused, and the images are classified using softmax. The recognition rate using the fused features is about 4% higher than with CNN features alone. Experiments demonstrate the performance of the proposed algorithm.
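
The fusion step described above (concatenating the CNN and Haar-CNN feature vectors and classifying with softmax) can be sketched as follows; the function names, toy dimensions, and the use of simple concatenation are illustrative assumptions, not the paper's implementation:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def fuse_and_classify(cnn_feat, haar_cnn_feat, weights, biases):
    """Concatenate the two feature vectors and apply a linear
    softmax classifier (one weight row per class)."""
    fused = cnn_feat + haar_cnn_feat  # concatenation fusion (assumed)
    logits = [sum(w * x for w, x in zip(row, fused)) + b
              for row, b in zip(weights, biases)]
    return softmax(logits)

# Toy example: 2-dim CNN features + 2-dim Haar-CNN features, 3 classes.
probs = fuse_and_classify(
    cnn_feat=[0.5, 1.0],
    haar_cnn_feat=[0.2, -0.3],
    weights=[[1.0, 0.0, 0.0, 0.0],
             [0.0, 1.0, 0.0, 0.0],
             [0.0, 0.0, 1.0, 1.0]],
    biases=[0.0, 0.0, 0.0],
)
print([round(p, 3) for p in probs])  # class 1 gets the highest probability
```

In a real system the classifier weights would be trained jointly with (or on top of) the fused features rather than set by hand.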

Sinkhole Tracking by Deep Learning and Data Association (딥 러닝과 데이터 결합에 의한 싱크홀 트래킹)

  • Ro, Soonghwan;Hoai, Nam Vu;Choi, Bokgil;Dung, Nguyen Manh
    • The Journal of Korean Institute of Information Technology
    • /
    • v.17 no.6
    • /
    • pp.17-25
    • /
    • 2019
  • Accurate tracking of sinkholes, which are appearing frequently these days, is an important way of preventing damage to people and property. Although many sinkhole detection systems have been proposed, the problem is still far from solved, especially for deep sinkholes. Furthermore, sinkhole detection algorithms suffer from unstable results, which makes it difficult for a system to issue a warning in real time. In this paper, we propose a method of sinkhole tracking by deep learning and data association that takes advantage of recent developments in CNN transfer learning. Our system consists of three main parts: binary segmentation, sinkhole classification, and sinkhole tracking. The experimental results show that sinkholes can be tracked in real time on the dataset. These achievements show that the proposed system can be applied in practical applications.
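
The data-association stage can be illustrated with a minimal greedy matcher that links existing sinkhole tracks to new detections by bounding-box IoU; the `associate` function, box format, and threshold are hypothetical stand-ins for the paper's method:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter) if inter else 0.0

def associate(tracks, detections, threshold=0.3):
    """Greedily match each existing track to the unused detection
    with the highest IoU above the threshold."""
    matches, used = [], set()
    for t_idx, t_box in enumerate(tracks):
        best, best_iou = None, threshold
        for d_idx, d_box in enumerate(detections):
            if d_idx in used:
                continue
            score = iou(t_box, d_box)
            if score > best_iou:
                best, best_iou = d_idx, score
        if best is not None:
            matches.append((t_idx, best))
            used.add(best)
    return matches

tracks = [(0, 0, 10, 10), (50, 50, 60, 60)]
detections = [(52, 51, 62, 61), (1, 1, 11, 11)]
print(associate(tracks, detections))  # track 0 → det 1, track 1 → det 0
```

Production trackers typically replace the greedy loop with an optimal assignment (e.g. the Hungarian algorithm) and add motion prediction, but the matching idea is the same.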

A Systems Engineering Approach for Predicting NPP Response under Steam Generator Tube Rupture Conditions using Machine Learning

  • Tran Canh Hai, Nguyen;Aya, Diab
    • Journal of the Korean Society of Systems Engineering
    • /
    • v.18 no.2
    • /
    • pp.94-107
    • /
    • 2022
  • Accident prevention and mitigation is the highest priority of nuclear power plant (NPP) operation, particularly in the aftermath of the Fukushima Daiichi accident, which reignited public anxiety and skepticism regarding the use of nuclear energy. To deal with accident scenarios more effectively, operators must have ample and precise information about key safety parameters as well as their future trajectories. This work investigates the potential of machine learning for forecasting NPP response in real time, to provide an additional validation method and help reduce human error, especially in accident situations where operators are under great stress. First, a base-case SGTR simulation is carried out with the best-estimate code RELAP5/MOD3.4 to confirm the validity of the model against results reported in the APR1400 Design Control Document (DCD). Then, uncertainty quantification is performed by coupling RELAP5/MOD3.4 with the statistical tool DAKOTA to generate a dataset large enough for the construction and training of neural machine learning (ML) models, namely LSTM, GRU, and a hybrid CNN-LSTM. Finally, the accuracy and reliability of these models in forecasting the system response are tested by their performance on fresh data. To facilitate and oversee the development of the ML models, a Systems Engineering (SE) methodology is used to ensure that the work remains consistent with the originating mission statement and that the findings obtained at each phase are valid.
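
The dataset-generation step, sampling uncertain inputs once per code run, can be sketched as below; the parameter names and ranges are invented for illustration, and DAKOTA itself offers far richer sampling designs such as Latin hypercube:

```python
import random

def sample_inputs(param_ranges, n_samples, seed=0):
    """Draw simple random samples of uncertain input parameters,
    one dict per simulation run."""
    rng = random.Random(seed)
    return [{name: rng.uniform(lo, hi)
             for name, (lo, hi) in param_ranges.items()}
            for _ in range(n_samples)]

# Hypothetical uncertain SGTR boundary conditions (illustrative only).
ranges = {
    "break_area_scale": (0.9, 1.1),
    "initial_power_scale": (0.98, 1.02),
}
runs = sample_inputs(ranges, n_samples=100)
print(len(runs))
```

Each sampled dict would parameterize one RELAP5 run, and the collected time-series outputs would form the training set for the sequence models.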

Empirical Study for Automatic Evaluation of Abstractive Summarization by Error-Types (오류 유형에 따른 생성요약 모델의 본문-요약문 간 요약 성능평가 비교)

  • Seungsoo Lee;Sangwoo Kang
    • Korean Journal of Cognitive Science
    • /
    • v.34 no.3
    • /
    • pp.197-226
    • /
    • 2023
  • Generative text summarization is a natural language processing task that generates a short summary while preserving the content of a long text. ROUGE, a lexical-overlap-based metric, is widely used to evaluate text summarization models in generative summarization benchmarks. Although models achieve very high ROUGE scores, studies report that 30% of generated summaries are still inconsistent with the source text. This paper proposes a methodology for evaluating the performance of a summarization model without using reference summaries. AggreFACT is a human-annotated dataset that classifies the types of errors made by neural text summarization models. Among all the test candidates, the cases in which errors occurred throughout the generated summary showed the highest correlation. We observed that the proposed evaluation score correlates highly with models fine-tuned from BART and PEGASUS, which are pretrained with large-scale Transformer architectures.
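
ROUGE's lexical-overlap idea can be illustrated with a minimal ROUGE-1 F1 computation (whitespace tokenization only; real implementations add stemming and further options):

```python
from collections import Counter

def rouge1_f1(candidate, reference):
    """ROUGE-1 F1: clipped unigram overlap between candidate and reference."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # per-word min of the two counts
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

score = rouge1_f1("the cat sat on the mat", "the cat lay on the mat")
print(round(score, 3))  # 5 of 6 unigrams overlap, so F1 ≈ 0.833
```

The example also shows ROUGE's blind spot: a summary can share most words with the reference yet still contradict it, which is exactly the factual-consistency gap the paper targets.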

Clinical Validation of a Deep Learning-Based Hybrid (Greulich-Pyle and Modified Tanner-Whitehouse) Method for Bone Age Assessment

  • Kyu-Chong Lee;Kee-Hyoung Lee;Chang Ho Kang;Kyung-Sik Ahn;Lindsey Yoojin Chung;Jae-Joon Lee;Suk Joo Hong;Baek Hyun Kim;Euddeum Shim
    • Korean Journal of Radiology
    • /
    • v.22 no.12
    • /
    • pp.2017-2025
    • /
    • 2021
  • Objective: To evaluate the accuracy and clinical efficacy of a hybrid Greulich-Pyle (GP) and modified Tanner-Whitehouse (TW) artificial intelligence (AI) model for bone age assessment. Materials and Methods: A deep learning-based model was trained on an open dataset of multiple ethnicities. A total of 102 hand radiographs (51 male and 51 female; mean age ± standard deviation = 10.95 ± 2.37 years) from a single institution were selected for external validation. Three human experts performed bone age assessments based on the GP atlas to develop a reference standard. Two study radiologists performed bone age assessments with and without AI model assistance in two separate sessions, for which the reading time was recorded. The performance of the AI software was assessed by comparing the mean absolute difference between the AI-calculated bone age and the reference standard. The reading time was compared between reading with and without AI using a paired t test. Furthermore, the reliability between the two study radiologists' bone age assessments was assessed using intraclass correlation coefficients (ICCs), and the results were compared between reading with and without AI. Results: The bone ages assessed by the experts and the AI model were not significantly different (11.39 ± 2.74 years and 11.35 ± 2.76 years, respectively, p = 0.31). The mean absolute difference was 0.39 years (95% confidence interval, 0.33-0.45 years) between the automated AI assessment and the reference standard. The mean reading time of the two study radiologists was reduced from 54.29 to 35.37 seconds with AI model assistance (p < 0.001). The ICC of the two study radiologists slightly increased with AI model assistance (from 0.945 to 0.990). Conclusion: The proposed AI model was accurate for assessing bone age. Furthermore, this model appeared to enhance the clinical efficacy by reducing the reading time and improving the inter-observer reliability.
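
The two statistics used above, the mean absolute difference and the paired t test, can be sketched with toy numbers; the reading times below are invented for illustration, and a real analysis would also derive the p-value from the t distribution:

```python
import math

def mean_abs_difference(pred, ref):
    """Mean absolute difference between paired measurements."""
    return sum(abs(p - r) for p, r in zip(pred, ref)) / len(pred)

def paired_t_statistic(a, b):
    """t statistic for a paired t test: t = mean(d) / (sd(d) / sqrt(n)),
    where d are the per-subject differences."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)
    return mean_d / math.sqrt(var_d / n)

# Invented per-case reading times (seconds) without and with AI assistance.
without_ai = [55.0, 52.0, 58.0, 50.0]
with_ai = [36.0, 34.0, 38.0, 33.0]
print(mean_abs_difference(without_ai, with_ai))
print(paired_t_statistic(without_ai, with_ai))
```

A large positive t on the differences corresponds to the paper's finding that reading time drops significantly with AI assistance.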

Korean and Multilingual Language Models Study for Cross-Lingual Post-Training (XPT) (Cross-Lingual Post-Training (XPT)을 위한 한국어 및 다국어 언어모델 연구)

  • Son, Suhyune;Park, Chanjun;Lee, Jungseob;Shim, Midan;Lee, Chanhee;Park, Kinam;Lim, Heuiseok
    • Journal of the Korea Convergence Society
    • /
    • v.13 no.3
    • /
    • pp.77-89
    • /
    • 2022
  • Much previous research has proven that pretraining a language model on a large corpus helps improve performance in various natural language processing tasks. However, there is a limit to building a large corpus for training in a language environment where resources are scarce. Using the Cross-Lingual Post-Training (XPT) method, we analyze its efficiency for Korean, a low-resource language. XPT selectively reuses the parameters of an English pretrained language model, a high-resource language, and uses an adaptation layer to learn the relationship between the two languages. We confirm that, in relation extraction, XPT with only a small amount of target-language data outperforms a language model pretrained on the target language. In addition, we analyze the characteristics of the Korean monolingual and multilingual language models released by domestic and foreign researchers and companies.
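
The adaptation-layer idea, learning a map from target-language embeddings into the reused source-model space, can be sketched as a plain linear layer; the dimensions and weights below are illustrative, not the paper's trained parameters:

```python
def linear_adaptation(x, weight, bias):
    """Apply an adaptation layer: a linear map from the target-language
    embedding space into the reused source-model embedding space."""
    return [sum(w * xi for w, xi in zip(row, x)) + b
            for row, b in zip(weight, bias)]

# Hypothetical 2-d target-language embedding mapped into a 3-d source space.
target_emb = [1.0, -1.0]
W = [[0.5, 0.0],
     [0.0, 0.5],
     [1.0, 1.0]]
b = [0.0, 0.1, 0.0]
print(linear_adaptation(target_emb, W, b))
```

During post-training, only this small layer (and selected embeddings) would be updated, which is what makes the method data-efficient for low-resource languages.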

A New Hyper Parameter of Hounsfield Unit Range in Liver Segmentation

  • Kim, Kangjik;Chun, Junchul
    • Journal of Internet Computing and Services
    • /
    • v.21 no.3
    • /
    • pp.103-111
    • /
    • 2020
  • Liver cancer is the most fatal cancer occurring worldwide. To diagnose liver cancer, the patient's condition is checked with radiation-based CT imaging. Diagnosing the liver on a patient's abdominal CT scan requires segmentation, which radiologists had to perform manually, costing tremendous time and introducing human error. To automate this, researchers attempted segmentation using image segmentation algorithms from the computer vision field, but it was still time-consuming because of the interactive operation and parameter settings involved. To reduce time and obtain more accurate segmentations, researchers have begun to segment the liver in CT images using CNNs, which show significant performance in various computer vision fields. The pixel value of a CT image is the Hounsfield Unit (HU) value, a relative representation of radiation transmittance that usually ranges from about -2000 to 2000. In general, deep learning researchers reduce or limit this range for training in order to remove noise and focus on the target organ. We observed that the HU range was limited in many studies but differed across liver segmentation studies, and we assumed that performance could vary depending on the HU range. In this paper, we propose treating the HU value range as a hyperparameter. U-Net and ResUNet were used to compare different HU range preprocessing of the CHAOS dataset under limited conditions. The results confirm that performance differs depending on the HU range. This proves that the HU range limit itself can be a hyperparameter, meaning there are HU ranges that provide optimal performance for various models.
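
HU range limiting as described above is a simple clip-and-rescale preprocessing step; a minimal sketch, with the (hu_min, hu_max) pair playing the role of the proposed hyperparameter:

```python
def window_hu(pixels, hu_min, hu_max):
    """Clip CT pixel values to an HU range and rescale to [0, 1].
    The (hu_min, hu_max) pair is the hyperparameter discussed above."""
    span = float(hu_max - hu_min)
    return [(min(max(p, hu_min), hu_max) - hu_min) / span for p in pixels]

raw = [-2000, -100, 40, 400, 2000]  # sample HU values
print(window_hu(raw, hu_min=-100, hu_max=400))
```

Sweeping (hu_min, hu_max) over candidate windows and retraining the segmentation model for each is what treating the range as a hyperparameter amounts to in practice.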

Music Genre Classification using Spikegram and Deep Neural Network (스파이크그램과 심층 신경망을 이용한 음악 장르 분류)

  • Jang, Woo-Jin;Yun, Ho-Won;Shin, Seong-Hyeon;Cho, Hyo-Jin;Jang, Won;Park, Hochong
    • Journal of Broadcast Engineering
    • /
    • v.22 no.6
    • /
    • pp.693-701
    • /
    • 2017
  • In this paper, we propose a new method for music genre classification using spikegrams and a deep neural network. The human auditory system encodes input sound in the time and frequency domains so as to maximize the amount of sound information delivered to the brain using minimal energy and resources. A spikegram is a method of analyzing a waveform based on this encoding function of the auditory system. In the proposed method, we analyze the signal using a spikegram and extract a feature vector composed of the key information for genre classification, which is used as the input to the neural network. We measure music genre classification performance on the GTZAN dataset, which consists of 10 music genres, and confirm that the proposed method provides good performance with a low-dimensional feature vector compared to current state-of-the-art methods.
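
One way to picture turning a spikegram into a low-dimensional feature vector is per-band pooling over spikes; the spike representation and pooled statistics below are illustrative assumptions, not the paper's exact features:

```python
def band_features(spikes, n_bands):
    """Summarize a spikegram, given as a list of (time, band, amplitude)
    spikes, into a low-dimensional vector of per-band spike counts and
    mean amplitudes."""
    counts = [0] * n_bands
    sums = [0.0] * n_bands
    for _, band, amp in spikes:
        counts[band] += 1
        sums[band] += amp
    means = [s / c if c else 0.0 for s, c in zip(sums, counts)]
    return counts + means

# Three toy spikes across three frequency bands.
spikes = [(0.01, 0, 0.5), (0.02, 0, 0.7), (0.015, 2, 1.0)]
print(band_features(spikes, n_bands=3))
```

The pooled vector, much smaller than the raw spikegram, is the kind of compact input a genre-classification network can consume.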

Virtual Environments for Medical Training: Soft tissue modeling (의료용 훈련을 위한 가상현실에 대한 연구)

  • Kim, Jung
    • Proceedings of the KSME Conference
    • /
    • 2007.05a
    • /
    • pp.372-377
    • /
    • 2007
  • For more than 2,500 years, surgical teaching has been based on the so-called "see one, do one, teach one" paradigm, in which the surgical trainee learns by operating on patients under close supervision of peers and superiors. However, higher demands on the quality of patient care and rising malpractice costs have made it increasingly risky to train on patients. Minimally invasive surgery, in particular, has made it more difficult for an instructor to demonstrate the required manual skills. It has been recognized that, similar to flight simulators for pilots, virtual reality (VR) based surgical simulators promise a safer and more comprehensive way to train the manual skills of medical personnel in general and surgeons in particular. One of the major challenges in the development of VR-based surgical trainers is the real-time and realistic simulation of interactions between surgical instruments and biological tissues. It involves multi-disciplinary research areas including soft tissue mechanical behavior, tool-tissue contact mechanics, computer haptics, computer graphics, and robotics, integrated into VR-based training systems. The research described in this paper addresses the problem of characterizing soft tissue properties for medical virtual environments. A system to measure in vivo mechanical properties of soft tissues was designed, and eleven sets of animal experiments were performed to measure in vivo and in vitro biomechanical properties of porcine intra-abdominal organs. Viscoelastic tissue parameters were then extracted by matching finite element model predictions with the empirical data. Finally, the tissue parameters were combined with geometric organ models segmented from the Visible Human Dataset and integrated into a minimally invasive surgical simulation system consisting of haptic interface devices and a graphic display.
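
The parameter-extraction step, matching model predictions to empirical data, can be sketched as a least-squares fit of a one-term relaxation model over a grid of candidate relaxation times; the paper matches full finite element predictions, so this is only a toy analogue:

```python
import math

def relaxation(t, g_inf, g1, tau):
    """One-term viscoelastic stress-relaxation model:
    G(t) = g_inf + g1 * exp(-t / tau)."""
    return g_inf + g1 * math.exp(-t / tau)

def fit_tau(times, forces, g_inf, g1, tau_grid):
    """Pick the relaxation time minimizing the sum of squared errors
    against the measurements, via a coarse grid search."""
    def sse(tau):
        return sum((relaxation(t, g_inf, g1, tau) - f) ** 2
                   for t, f in zip(times, forces))
    return min(tau_grid, key=sse)

# Synthetic "measurements" generated with tau = 0.5 s.
times = [0.0, 0.25, 0.5, 1.0, 2.0]
forces = [relaxation(t, 1.0, 2.0, 0.5) for t in times]
print(fit_tau(times, forces, 1.0, 2.0, [0.1, 0.25, 0.5, 1.0, 2.0]))
```

A real inverse finite element fit optimizes several parameters simultaneously with gradient-based or global optimizers, but the objective, minimizing the model-versus-data mismatch, is the same.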


ViStoryNet: Neural Networks with Successive Event Order Embedding and BiLSTMs for Video Story Regeneration (ViStoryNet: 비디오 스토리 재현을 위한 연속 이벤트 임베딩 및 BiLSTM 기반 신경망)

  • Heo, Min-Oh;Kim, Kyung-Min;Zhang, Byoung-Tak
    • KIISE Transactions on Computing Practices
    • /
    • v.24 no.3
    • /
    • pp.138-144
    • /
    • 2018
  • A video is a vivid medium similar to human visual-linguistic experience, since it can convey a sequence of situations, actions, or dialogues that can be told as a story. In this study, we propose story learning/regeneration frameworks for videos with successive event order supervision for contextual coherence. The supervision induces each episode to take the form of a trajectory in the latent space, which constitutes a composite representation of ordering and semantics. We use kids' videos as training data. Their advantages include an omnibus style; a simple, explicit, and short storyline; a chronological narrative order; and a relatively limited number of characters and spatial environments. We build an encoder-decoder structure with successive event order embedding (SEOE), and train bidirectional LSTMs as sequence models for multi-step sequence prediction. Using a series of approximately 200 episodes of the kids' video 'Pororo the Little Penguin', we give empirical results for story regeneration tasks and SEOE. In addition, each episode forms a trajectory-like shape in the latent space of the model, which provides geometric information for the sequence models.
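
The multi-step sequence prediction setup can be sketched as building (context, future) training pairs from an ordered episode; the integer events below stand in for embedded video segments:

```python
def multistep_pairs(sequence, context, horizon):
    """Build (context, future) training pairs for multi-step sequence
    prediction: each window of `context` events is paired with the
    following `horizon` events as the prediction target."""
    pairs = []
    for i in range(len(sequence) - context - horizon + 1):
        pairs.append((sequence[i:i + context],
                      sequence[i + context:i + context + horizon]))
    return pairs

# A toy six-event episode; real inputs would be event embedding vectors.
episode = [0, 1, 2, 3, 4, 5]
print(multistep_pairs(episode, context=3, horizon=2))
```

Training a sequence model on such pairs rewards predictions that respect the successive order of events, which is the intuition behind the order supervision described above.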