• Title/Summary/Keyword: High Train

A Study on the Analysis of Miles Training Effect (마일즈 훈련효과 분석에 관한 연구)

  • Lee, Yong-Yeon;Lee, Ho Jun;Kim, Yong-Pil
    • Journal of the Korea Academia-Industrial cooperation Society / v.22 no.4 / pp.353-359 / 2021
  • The Army is building a training system that applies the latest science and technology to military training using Miles equipment. The Miles training system uses Miles equipment during practiced maneuvers in the field to simulate damage to combat personnel and equipment in the same way as an actual battlefield, so the training force can experience conditions similar to actual battle. In particular, the training effect on the soldiers participating can be maximized by establishing an integrated system that draws on cutting-edge technologies such as information and communication and computer simulation. This study analyzed the effects of Miles training in the Army using scientific techniques, targeting the mid-range Miles system. Effect indices for analyzing the training effect were derived from a literature survey and expert opinion, the weight of each index was calculated using the Swing method, and the final training effect was obtained by combining these weights with survey responses from personnel who had experienced the training. Miles training was 2.6 times more effective than previous training without Miles, and an analysis of variance showed that satisfaction with Miles training by status was high, with statistically significant differences.
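As a rough illustration of the Swing weighting step described in the abstract, the following sketch normalizes expert "swing" ratings into weights and combines them with per-index scores. The index names, ratings, and scores below are hypothetical placeholders, not the study's actual values.

```python
# Minimal sketch of Swing weighting for combining effect indices.
# Index names, ratings, and scores are hypothetical, not the study's data.

def swing_weights(swing_ratings):
    """Normalize Swing ratings (most important index = 100) into weights summing to 1."""
    total = sum(swing_ratings.values())
    return {name: rating / total for name, rating in swing_ratings.items()}

def weighted_effect(scores, weights):
    """Combine per-index effect scores into a single training-effect value."""
    return sum(scores[name] * weights[name] for name in weights)

# Hypothetical expert ratings: the most important index gets 100,
# the others are rated relative to that "swing".
ratings = {"engagement_realism": 100, "tactical_proficiency": 70, "after_action_review": 50}
weights = swing_weights(ratings)

# Hypothetical survey scores (1-5 scale) for training with and without Miles equipment.
with_miles = {"engagement_realism": 4.6, "tactical_proficiency": 4.2, "after_action_review": 4.4}
without_miles = {"engagement_realism": 1.8, "tactical_proficiency": 1.6, "after_action_review": 1.7}

print(weighted_effect(with_miles, weights) / weighted_effect(without_miles, weights))
```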

Effect on self-enhancement of deep-learning inference by repeated training of false detection cases in tunnel accident image detection (터널 내 돌발상황 오탐지 영상의 반복 학습을 통한 딥러닝 추론 성능의 자가 성장 효과)

  • Lee, Kyu Beom;Shin, Hyu Soung
    • Journal of Korean Tunnelling and Underground Space Association / v.21 no.3 / pp.419-432 / 2019
  • Most deep learning models are trained by supervised learning, which uses labeled data composed of inputs and their corresponding outputs. Because labeled data is generated manually, its accuracy is relatively high, but securing it requires considerable cost and time. In addition, the main goal of supervised learning is to improve detection performance on 'True Positive' data, not to reduce the occurrence of 'False Positive' detections. In this paper, unpredictable 'False Positive' detections produced by a model trained on labeled 'True Positive' data were observed while monitoring a deep learning-based CCTV accident detection system in operation at a tunnel monitoring center. These false positives for 'fire' or 'person' objects occurred frequently for the lights of work vehicles, sunlight reflected at the tunnel entrance, long dark features on parts of the lane or vehicles, and similar cases. To solve this problem, a deep learning model was developed by training simultaneously on the 'False Positive' data collected in the field and on the labeled data. As a result, compared with the model trained only on the existing labeled data, re-inference performance on the labeled data improved. In addition, re-inference on the 'False Positive' data showed that false positives for persons were reduced more when the training set included more 'False Positive' data. By training on the 'False Positive' data, the field applicability of the deep learning model improved automatically.
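A minimal sketch of the retraining idea, assuming field false-positive frames are folded back into the training set as "no object" examples. The directory layout and label format are assumptions for illustration, not the paper's actual pipeline.

```python
# Sketch: fold field-collected false-positive frames back into the training set
# as background (no-object) examples so the detector learns to suppress them.
# Directory layout and label format are assumptions, not the paper's pipeline.
from pathlib import Path

def build_retraining_list(labeled_dir: str, false_positive_dir: str) -> list[tuple[str, str]]:
    """Return (image_path, label) pairs combining original labels with FP frames."""
    samples = []
    for img in Path(labeled_dir).glob("*.jpg"):
        label_file = img.with_suffix(".txt")  # original manual annotation, if present
        label = label_file.read_text().strip() if label_file.exists() else ""
        samples.append((str(img), label))
    for img in Path(false_positive_dir).glob("*.jpg"):
        samples.append((str(img), ""))        # empty label = no object present
    return samples

if __name__ == "__main__":
    train_set = build_retraining_list("data/labeled", "data/false_positives")
    print(f"{len(train_set)} samples ready for re-training")
```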

Detection and Identification of Moving Objects at Busy Traffic Road based on YOLO v4 (YOLO v4 기반 혼잡도로에서의 움직이는 물체 검출 및 식별)

  • Li, Qiutan;Ding, Xilong;Wang, Xufei;Chen, Le;Son, Jinku;Song, Jeong-Young
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.21 no.1 / pp.141-148 / 2021
  • At some intersections and busy roads, there are more pedestrians during specific periods, and road congestion causes many traffic accidents. At intersections near schools in particular, protecting the traffic safety of students during busy hours is especially important. In the past, traffic light design seldom took pedestrian safety into account, and research focused mostly on identifying motor vehicles and optimizing traffic. How to keep traffic flowing as smoothly as possible while ensuring the safety of pedestrians, especially students, is the key research direction of this paper. The paper focuses on recognition of persons, motorcycles, bicycles, cars, and buses. Through investigation and comparison, it proposes using the YOLO v4 network to identify the location and number of objects. YOLO v4 offers strong small-target recognition, high precision, and fast processing speed; data acquisition targets were set to train and test the image set. Based on statistics of the accuracy, error, and omission rates for targets in the video, the network trained in this paper accurately and effectively identifies persons, motorcycles, bicycles, cars, and buses in moving images.
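A minimal sketch of counting these five classes with a pre-trained YOLO v4 model loaded through OpenCV's DNN module. The file names (yolov4.cfg, yolov4.weights, coco.names) and the test image are assumptions, standing in for the standard Darknet release files; non-maximum suppression is omitted for brevity.

```python
# Sketch: count persons, motorbikes, bicycles, cars, and buses in a frame with
# a pre-trained YOLO v4 model via OpenCV DNN. NMS is omitted for brevity, so
# overlapping detections may be counted more than once.
import cv2
import numpy as np

TARGETS = {"person", "motorbike", "bicycle", "car", "bus"}  # COCO class names

net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")
classes = open("coco.names").read().splitlines()

def count_objects(frame, conf_threshold=0.5):
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())
    counts = {name: 0 for name in TARGETS}
    for output in outputs:
        for detection in output:       # row: [cx, cy, w, h, objectness, class scores...]
            scores = detection[5:]
            class_id = int(np.argmax(scores))
            if scores[class_id] > conf_threshold and classes[class_id] in TARGETS:
                counts[classes[class_id]] += 1
    return counts

frame = cv2.imread("intersection.jpg")  # hypothetical test image
print(count_objects(frame))
```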

Transfer Learning Backbone Network Model Analysis for Human Activity Classification Using Imagery (영상기반 인체행위분류를 위한 전이학습 중추네트워크모델 분석)

  • Kim, Jong-Hwan;Ryu, Junyeul
    • Journal of the Korea Society for Simulation / v.31 no.1 / pp.11-18 / 2022
  • Recently, research on classifying human activity from imagery has been actively conducted for crime prevention and facility safety in public places and facilities. To improve the performance of human activity classification, most studies apply deep learning-based transfer learning. However, despite the growing number and diversifying architectures of the backbone network models on which deep learning is based, research on finding a backbone network model suited to the purpose of operation is insufficient because of the tendency to default to a particular model. This study therefore applies transfer learning to recently developed deep learning backbone network models to build an intelligent system that classifies human activity from imagery. Twelve types of active, high-contact human activities based on sports, rather than basic human behaviors, were selected and 7,200 images were collected. After 20 epochs of transfer learning were applied identically to five backbone network models, we analyzed them quantitatively to find the best backbone network model for human activity classification in terms of learning process and resulting performance. As a result, the XceptionNet model achieved training and validation accuracies of 0.99 and 0.91, Top-2 accuracy and average precision of 0.96 and 0.91, a training time of 1,566 s, and a model memory size of 260.4 MB, confirming that its performance was higher than that of the other models.
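A minimal sketch of the transfer learning setup described above, using a frozen Xception backbone and a 12-class head trained for 20 epochs in Keras. The directory layout, image size, and optimizer are assumptions; the paper's exact configuration is not given in the abstract.

```python
# Sketch: 20-epoch transfer learning with a frozen Xception backbone for
# 12 activity classes. Paths, image size, and optimizer are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 12

train_ds = tf.keras.utils.image_dataset_from_directory(
    "activities/train", image_size=(299, 299), batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "activities/val", image_size=(299, 299), batch_size=32)

base = tf.keras.applications.Xception(weights="imagenet", include_top=False,
                                      input_shape=(299, 299, 3))
base.trainable = False  # freeze the pre-trained backbone

model = models.Sequential([
    layers.Rescaling(1.0 / 127.5, offset=-1),   # Xception expects inputs in [-1, 1]
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=20)
```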

Survey Results to Understand the Current Status of Pest Management in Farms (농가의 병해충 관리 현황 이해를 위한 설문조사 결과)

  • Kwon, D.H.
    • Journal of Practical Agriculture & Fisheries Research / v.23 no.2 / pp.87-97 / 2021
  • To investigate the current status of pest management in Korea, an online survey was conducted with 151 students and graduates of the Korea National College of Agriculture and Fisheries (KNCAF). The questionnaire consisted of two parts: basic questions and pest control questions. The basic questions covered the respondent's age, academic status, cultivated crops, and cultivated area. The pest control questions covered pest control methods, the rationale for pesticide selection, and pest forecasting methods. Among the basic questions, respondents in their 20s accounted for 91.2%, 34.5% of respondents cultivated more than 3 hectares, and cultivation methods differed by crop. Among the pest control questions, the major control method was conventional chemical pesticides (>66%). Regarding the rationale for pesticide selection, respondents either made their own decisions based on existing control techniques (30%) or relied on the decisions of pesticide vendors (29%). Pest forecasting was obtained mainly from organizations affiliated with the Rural Development Administration (29%) and the National Crop Pest Management System (27%). Regarding the reliability of pesticide vendors' pest diagnoses and pesticide prescriptions, 97% of respondents rated it above average, but none rated it as strongly reliable. Interestingly, 79% of respondents agreed that there is a strong need to train experts in pest diagnosis and pesticide prescription, and 47% supported this very strongly. These results suggest that farmers may need more qualified experts in pest diagnosis and pesticide prescription. Taken together, the survey results provide important information for understanding the current status of pest management from the farmers' point of view and are useful for setting the direction of pest control.

Change Detection Using Deep Learning Based Semantic Segmentation for Nuclear Activity Detection and Monitoring (핵 활동 탐지 및 감시를 위한 딥러닝 기반 의미론적 분할을 활용한 변화 탐지)

  • Song, Ahram;Lee, Changhui;Lee, Jinmin;Han, Youkyung
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.991-1005 / 2022
  • Satellite imagery is an effective supplementary data source for detecting and verifying nuclear activity, and it is especially useful for regions with limited access and information, such as nuclear installations. Time series analysis, in particular, can identify preparations for a nuclear test, such as relocating equipment or altering facilities. In this work, differences in the semantic segmentation results of time series images were used to detect changes in meaningful objects connected to nuclear activity. Building, road, and small-object datasets composed of KOMPSAT 3/3A images provided by AIHub were used to train deep learning models such as U-Net, PSPNet, and Attention U-Net, and model parameters were adjusted to select suitable models for each target. The final change detection was carried out by incorporating object information into the initial change detection, which was obtained as the difference between the semantic segmentation results. The experimental results demonstrated that the proposed approach can effectively identify changed pixels. Although the approach depends on the accuracy of the semantic segmentation results, it is expected that as the dataset for the region of interest grows, so will the applicable scope of the proposed method.
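A minimal sketch of deriving a change mask as the difference between two semantic segmentation maps, keeping only changes that involve the object classes of interest. The class ids, array shapes, and toy example are assumptions for illustration, not the paper's data.

```python
# Sketch: change detection as the difference between two segmentation results,
# restricted to the target classes (building, road, small object).
# Class ids and the toy arrays are assumptions for illustration.
import numpy as np

BACKGROUND, BUILDING, ROAD, SMALL_OBJECT = 0, 1, 2, 3
TARGET_CLASSES = [BUILDING, ROAD, SMALL_OBJECT]

def change_mask(seg_t1: np.ndarray, seg_t2: np.ndarray) -> np.ndarray:
    """Pixels whose class changed and that involve a target class at either date."""
    changed = seg_t1 != seg_t2
    involves_target = np.isin(seg_t1, TARGET_CLASSES) | np.isin(seg_t2, TARGET_CLASSES)
    return changed & involves_target

# Toy 4x4 example: a new building appears in the lower-right corner.
t1 = np.full((4, 4), BACKGROUND)
t2 = t1.copy()
t2[2:, 2:] = BUILDING
print(change_mask(t1, t2).astype(int))
```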

Flood Mapping Using Modified U-NET from TerraSAR-X Images (TerraSAR-X 영상으로부터 Modified U-NET을 이용한 홍수 매핑)

  • Yu, Jin-Woo;Yoon, Young-Woong;Lee, Eu-Ru;Baek, Won-Kyung;Jung, Hyung-Sup
    • Korean Journal of Remote Sensing / v.38 no.6_2 / pp.1709-1722 / 2022
  • The rise in temperature induced by global warming has given rise to El Niño and La Niña events and abnormally changed seawater temperatures. Rainfall concentrates in particular locations because of these abnormal variations, causing frequent abnormal floods. To recover from and prevent human and property damage caused by floods, it is important to detect flooded regions rapidly, which is possible with synthetic aperture radar (SAR). This study aims to build a model that directly derives flood-damaged areas by applying a modified U-NET to TerraSAR-X images, using Multi Kernel blocks to reduce the effect of speckle noise through the extraction of diverse feature maps and taking two images, acquired before and after flooding, as input data. To that end, the two SAR images were preprocessed to generate the model's input data, which was then fed to the modified U-NET structure to train the flood detection deep learning model. With this method, flooded areas were detected at a high level, with an average F1 score of 0.966. These results are expected to contribute to the rapid recovery of flood-stricken areas and the derivation of flood-prevention measures.
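A minimal sketch of the two ideas named above: a "Multi Kernel" convolution block that extracts feature maps with several kernel sizes, and a two-image (pre- and post-flood) input formed by channel concatenation. The channel sizes and kernel choices are assumptions; the paper's modified U-NET layout is not reproduced here.

```python
# Sketch: a Multi Kernel convolution block and a two-image SAR input stem.
# Channel sizes and kernel sizes are assumptions for illustration.
import torch
import torch.nn as nn

class MultiKernelBlock(nn.Module):
    """Parallel 3x3 / 5x5 / 7x7 convolutions whose feature maps are concatenated."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        branch_ch = out_ch // 3
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, branch_ch, kernel_size=k, padding=k // 2) for k in (3, 5, 7)
        ])
        self.fuse = nn.Conv2d(branch_ch * 3, out_ch, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        features = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.act(self.fuse(features))

# Pre- and post-flood SAR images stacked along the channel axis as input.
pre_flood = torch.randn(1, 1, 256, 256)
post_flood = torch.randn(1, 1, 256, 256)
x = torch.cat([pre_flood, post_flood], dim=1)          # shape: (1, 2, 256, 256)
print(MultiKernelBlock(in_ch=2, out_ch=48)(x).shape)   # -> (1, 48, 256, 256)
```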

Design of Simulation Prototype UI for Virtual Reality-based Air Blast and Vibration (가상현실 기반 발파소음 및 진동 시뮬레이션 UI 설계)

  • Lee, Dongyoun;Lee, Sang Gyu;Seo, Myoung Bae
    • Smart Media Journal / v.10 no.4 / pp.35-44 / 2021
  • Recently, a new subway project called the "Great Train Express" has been in progress. Tunnel excavation in city centers generates vibration and noise that disturb nearby residents. To prepare for this, the construction company generally establishes a noise and vibration management plan at the site from the construction planning stage, through consultation with residents of nearby areas and the preparation of countermeasures for complaints. However, even with such a plan, civil complaints are not fundamentally resolved, because noise and vibration still occur while construction is in progress. One of the best solutions to this problem is to provide nearby residents with noise and vibration simulation technology that offers a high sense of realism and immersion. Considering the ease and convenience of using such a system, we develop the UI (user interface) needed for a simulation system in which users can directly experience air blast and vibration in virtual reality. The results of this study are expected to contribute to the development of virtual reality-based air blast and vibration simulations in the future.

Evaluation of Human Demonstration Augmented Deep Reinforcement Learning Policies via Object Manipulation with an Anthropomorphic Robot Hand (휴먼형 로봇 손의 사물 조작 수행을 이용한 사람 데모 결합 강화학습 정책 성능 평가)

  • Park, Na Hyeon;Oh, Ji Heon;Ryu, Ga Hyun;Lopez, Patricio Rivera;Anazco, Edwin Valarezo;Kim, Tae Seong
    • KIPS Transactions on Software and Data Engineering / v.10 no.5 / pp.179-186 / 2021
  • Manipulating complex objects with an anthropomorphic robot hand, as a human hand does, is a challenge in human-centric environments. To train an anthropomorphic robot hand with many degrees of freedom (DoF), human demonstration augmented deep reinforcement learning policy optimization methods have been proposed. In this work, we first show that augmenting deep reinforcement learning (DRL) with human demonstrations is effective for object manipulation by comparing the performance of the augmentation-free Natural Policy Gradient (NPG) with Demonstration Augmented NPG (DA-NPG). Then three DRL policy optimization methods, namely NPG, Trust Region Policy Optimization (TRPO), and Proximal Policy Optimization (PPO), were evaluated with demonstration augmentation (DA-NPG, DA-TRPO, and DA-PPO) and without it, by manipulating six objects: an apple, banana, bottle, light bulb, camera, and hammer. The results show that DA-NPG achieved an average success rate of 99.33%, whereas NPG achieved only 60%. In addition, DA-NPG succeeded in grasping all six objects, while DA-TRPO and DA-PPO failed to grasp some objects and showed unstable performance.
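A minimal sketch of a demonstration-augmented policy update in the spirit of DA-NPG, combining a policy-gradient term on environment rollouts with a behavior-cloning term on human demonstrations. The network sizes, Gaussian policy with fixed variance, weighting factor, and stand-in data are all assumptions, not the paper's implementation (which uses natural-gradient updates).

```python
# Sketch: demonstration-augmented policy update (policy gradient + behavior
# cloning on demos). Sizes, fixed variance, and bc_weight are assumptions.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(24, 64), nn.Tanh(), nn.Linear(64, 4))  # toy hand policy
optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)

def augmented_loss(rollout, demos, bc_weight=0.1):
    """Policy-gradient surrogate on rollouts plus behavior cloning on demo actions."""
    obs, actions, advantages = rollout            # tensors collected from the environment
    demo_obs, demo_actions = demos                # tensors taken from human demonstrations

    dist = torch.distributions.Normal(policy(obs), 1.0)           # unit-variance Gaussian policy
    pg_loss = -(dist.log_prob(actions).sum(dim=-1) * advantages).mean()

    demo_dist = torch.distributions.Normal(policy(demo_obs), 1.0)
    bc_loss = -demo_dist.log_prob(demo_actions).sum(dim=-1).mean()
    return pg_loss + bc_weight * bc_loss

# Toy update step with random stand-in data.
rollout = (torch.randn(32, 24), torch.randn(32, 4), torch.randn(32))
demos = (torch.randn(16, 24), torch.randn(16, 4))
optimizer.zero_grad()
augmented_loss(rollout, demos).backward()
optimizer.step()
```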

Review of Land Cover Classification Potential in River Spaces Using Satellite Imagery and Deep Learning-Based Image Training Method (딥 러닝 기반 이미지 트레이닝을 활용한 하천 공간 내 피복 분류 가능성 검토)

  • Woochul, Kang;Eun-kyung, Jang
    • Ecology and Resilient Infrastructure / v.9 no.4 / pp.218-227 / 2022
  • This study attempted deep learning-based image training for land cover classification in river spaces, one of the important data sources for efficient river management. For this purpose, land cover classification of RGB imagery of the target section was conducted based on the category classification index of the major land cover map, using models trained on labeled data. In addition, land cover classification of the river spaces was performed by unsupervised and supervised classification of openly available Sentinel-2 satellite images, and the results were compared with those of the deep learning-based image classification. The analysis showed more accurate predictions than the unsupervised classification results, and the classification results improved significantly for high-resolution imagery. The results of this study demonstrate the possibility of classifying water areas and wetlands within river spaces, and with additional research the deep learning-based image training method for land cover classification could be used for river management.
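A minimal sketch of the unsupervised classification baseline mentioned above, clustering Sentinel-2 band values with k-means. The band-stack shape, class count, and synthetic scene are assumptions for illustration; the deep learning comparison is not shown here.

```python
# Sketch: unsupervised land cover classification by k-means clustering of
# per-pixel band values. Shapes, class count, and data are assumptions.
import numpy as np
from sklearn.cluster import KMeans

def unsupervised_landcover(bands: np.ndarray, n_classes: int = 5) -> np.ndarray:
    """bands: (height, width, n_bands) reflectance stack -> (height, width) class map."""
    h, w, b = bands.shape
    pixels = bands.reshape(-1, b)
    labels = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit_predict(pixels)
    return labels.reshape(h, w)

# Toy 64x64 scene with 4 synthetic bands standing in for Sentinel-2 data.
scene = np.random.rand(64, 64, 4)
class_map = unsupervised_landcover(scene)
print(np.unique(class_map, return_counts=True))
```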