• Title/Summary/Keyword: 스마트러닝 환경 (Smart Learning Environment)

Search Results: 219

Non-face-to-face online home training application study using deep learning-based image processing technique and standard exercise program (딥러닝 기반 영상처리 기법 및 표준 운동 프로그램을 활용한 비대면 온라인 홈트레이닝 어플리케이션 연구)

  • Shin, Youn-ji; Lee, Hyun-ju; Kim, Jun-hee; Kwon, Da-young; Lee, Seon-ae; Choo, Yun-jin; Park, Ji-hye; Jung, Ja-hyun; Lee, Hyoung-suk; Kim, Joon-ho
    • The Journal of the Convergence on Culture Technology / v.7 no.3 / pp.577-582 / 2021
  • Recently, with the development of AR, VR, and smart device technologies, demand for services based on non-face-to-face environments is also increasing in the fitness industry. A non-face-to-face online home training service has the advantage of not being limited by time and place, unlike existing offline services. However, it also has disadvantages, including the absence of exercise equipment and the difficulty of measuring the amount of exercise and checking whether the user maintains an accurate exercise posture. In this study, we develop a standard exercise program that can compensate for these shortcomings and propose a new non-face-to-face home training application that uses a deep learning-based body posture estimation image processing algorithm. The application allows the user to directly watch and follow the trainer in the standard exercise program video, correct his or her own posture, and perform the exercise accurately. Furthermore, if the results of this study are customized to their purpose, they can also be applied to performances, films, club activities, and conferences.
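
The abstract does not include code, but the posture-correction idea can be illustrated with a small sketch: assuming a deep learning pose estimator already returns 2D joint coordinates for both the trainer's reference frame and the user's frame, one simple way to flag an inaccurate posture is to compare joint angles. The joint names and tolerance below are hypothetical, not taken from the paper.

```python
import math

# Hypothetical keypoint layout: name -> (x, y) pixel coordinates,
# as returned by some deep learning pose estimator (not specified in the paper).
Keypoints = dict[str, tuple[float, float]]

def joint_angle(kps: Keypoints, a: str, b: str, c: str) -> float:
    """Angle at joint b (degrees) formed by points a-b-c."""
    ax, ay = kps[a]; bx, by = kps[b]; cx, cy = kps[c]
    v1 = (ax - bx, ay - by)
    v2 = (cx - bx, cy - by)
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    cosang = max(-1.0, min(1.0, dot / (n1 * n2 + 1e-9)))
    return math.degrees(math.acos(cosang))

# Joints to compare between trainer and user; purely illustrative.
ANGLE_TRIPLES = [
    ("shoulder_r", "elbow_r", "wrist_r"),
    ("hip_r", "knee_r", "ankle_r"),
]

def posture_feedback(user: Keypoints, trainer: Keypoints, tol_deg: float = 15.0):
    """Return the joints whose angle deviates from the reference pose by more than tol_deg."""
    issues = []
    for a, b, c in ANGLE_TRIPLES:
        diff = abs(joint_angle(user, a, b, c) - joint_angle(trainer, a, b, c))
        if diff > tol_deg:
            issues.append((b, round(diff, 1)))
    return issues
```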

DECODE: A Novel Method of DEep CNN-based Object DEtection using Chirps Emission and Echo Signals in Indoor Environment (실내 환경에서 Chirp Emission과 Echo Signal을 이용한 심층신경망 기반 객체 감지 기법)

  • Nam, Hyunsoo; Jeong, Jongpil
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.21 no.3 / pp.59-66 / 2021
  • Humans mainly recognize surrounding objects using visual and auditory information among the five senses (sight, hearing, smell, touch, and taste). Most recent object recognition research, however, focuses on analysis of image sensor information. In this paper, various chirp audio signals are emitted into the observation space, the echoes are collected through a 2-channel receiving sensor and converted into spectral images, and an object recognition experiment in 3D space is conducted using a deep learning-based image learning algorithm. The experiment was carried out with the noise and reverberation of a general indoor environment rather than under the ideal conditions of an anechoic chamber, and object recognition through echoes estimated the position of the object with 83% accuracy. In addition, by mapping the inference result to the observation space and a 3D sound spatial signal and outputting it as sound, visual information could be conveyed through sound by learning 3D audio. This means that object recognition research should use various kinds of echo information along with image information, and this technology could be applied to augmented reality through 3D sound.
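
To make the pipeline concrete, here is a minimal sketch of the described flow: a 2-channel echo recording is converted into a log-spectrogram "image" and passed to a CNN classifier. The paper's actual network and signal parameters are not given in the abstract, so the scipy/PyTorch stand-ins below (window sizes, layer widths, 9 spatial cells) are assumptions.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import spectrogram

def echoes_to_image(echo_2ch: np.ndarray, fs: int = 44100) -> torch.Tensor:
    """Turn a (2, n_samples) echo recording into a 2-channel log-spectrogram 'image'."""
    chans = []
    for ch in echo_2ch:
        _, _, sxx = spectrogram(ch, fs=fs, nperseg=256, noverlap=128)
        chans.append(np.log1p(sxx))
    img = np.stack(chans)                               # (2, freq_bins, time_bins)
    return torch.from_numpy(img).float().unsqueeze(0)   # add batch dimension

class EchoCNN(nn.Module):
    """Stand-in CNN classifying echo spectrogram images into spatial-position classes."""
    def __init__(self, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Example: classify a synthetic 0.1 s stereo echo into one of 9 assumed spatial cells.
model = EchoCNN(n_classes=9)
fake_echo = np.random.randn(2, 4410)
logits = model(echoes_to_image(fake_echo))
predicted_cell = int(logits.argmax(dim=1))
```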

CNN-based Shadow Detection Method using Height map in 3D Virtual City Model (3차원 가상도시 모델에서 높이맵을 이용한 CNN 기반의 그림자 탐지방법)

  • Yoon, Hee Jin; Kim, Ju Wan; Jang, In Sung; Lee, Byung-Dai; Kim, Nam-Gi
    • Journal of Internet Computing and Services / v.20 no.6 / pp.55-63 / 2019
  • Recently, the use of real-world image data has been increasing to express realistic virtual environments in various application fields such as education, manufacturing, and construction. In particular, with growing interest in digital twins such as smart cities, realistic 3D urban models are being built from real-world images such as aerial imagery. However, captured aerial images include shadows cast by the sun, and a 3D city model that includes these shadows presents distorted information to the user. Many studies have been conducted on removing shadows, but it is still recognized as a challenging and unsolved problem. In this paper, we construct a virtual environment dataset that includes a building height map using the 3D spatial information provided by VWorld, and we propose a new shadow detection method that uses the height map together with deep learning. The experimental results show that the shadow detection error rate is reduced when the height map is used.
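
A plausible way to wire the described idea is to feed the building height map as an extra input channel alongside the aerial RGB tile, so the network can learn where shadows are geometrically consistent with building heights. The abstract does not specify the architecture; the toy fully convolutional segmenter below is only an illustrative assumption.

```python
import torch
import torch.nn as nn

class ShadowNet(nn.Module):
    """Toy fully convolutional shadow segmenter over RGB + height-map input."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),   # 4 channels: R, G, B, height
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),                         # per-pixel shadow logit
        )

    def forward(self, rgb: torch.Tensor, height: torch.Tensor) -> torch.Tensor:
        # rgb: (B, 3, H, W); height: (B, 1, H, W) normalized building heights
        x = torch.cat([rgb, height], dim=1)
        return torch.sigmoid(self.net(x))                # shadow probability map (B, 1, H, W)

# Usage with dummy tensors standing in for an aerial tile and its height map.
model = ShadowNet()
rgb = torch.rand(1, 3, 256, 256)
height = torch.rand(1, 1, 256, 256)
shadow_mask = model(rgb, height) > 0.5
```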

Development of System for Real-Time Object Recognition and Matching using Deep Learning at Simulated Lunar Surface Environment (딥러닝 기반 달 표면 모사 환경 실시간 객체 인식 및 매칭 시스템 개발)

  • Jong-Ho Na; Jun-Ho Gong; Su-Deuk Lee; Hyu-Soung Shin
    • Tunnel and Underground Space / v.33 no.4 / pp.281-298 / 2023
  • Continuous research efforts are being devoted to unmanned mobile platforms for lunar exploration, and there is an ongoing demand for real-time information processing to accurately determine the positioning and mapping of areas of interest on the lunar surface. To apply deep learning processing and analysis techniques to practical rovers, research on software integration and optimization is imperative. In this study, a foundational investigation was conducted on the real-time analysis of images of a virtual lunar base construction site, aimed at automatically quantifying the spatial information of key objects. The work involved transitioning from an existing region-based object recognition algorithm to a bounding box-based algorithm, improving object recognition accuracy and inference speed. To facilitate object matching training on large amounts of data, the Batch Hard Triplet Mining technique was introduced, and both the training and inference processes were optimized. Furthermore, an improved software system for object recognition and identical-object matching was integrated, together with visualization software for automatically matching identical objects within input images. Using simulated satellite-captured video data for training and video data captured from a moving platform for inference, training and inference for identical-object matching were successfully executed. The outcomes of this research suggest the feasibility of building 3D spatial information from the continuously captured video data of mobile platforms and using it to position objects within regions of interest. These findings are expected to contribute to an integrated, automated on-site system for video-based construction monitoring and control of significant target objects at future lunar base construction sites.
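
Batch Hard Triplet Mining, which the paper adopts for identical-object matching, is a published technique: within each mini-batch, every anchor is paired with its hardest positive (the farthest embedding of the same identity) and hardest negative (the closest embedding of a different identity). A minimal NumPy sketch of that loss follows; the embedding dimension, margin, and labels are illustrative, not the paper's values.

```python
import numpy as np

def batch_hard_triplet_loss(embeddings: np.ndarray, labels: np.ndarray,
                            margin: float = 0.3) -> float:
    """Batch-hard triplet loss: for each anchor use its hardest positive
    (largest same-label distance) and hardest negative (smallest other-label distance)."""
    # Pairwise Euclidean distance matrix, shape (N, N).
    diff = embeddings[:, None, :] - embeddings[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1) + 1e-12)

    same = labels[:, None] == labels[None, :]
    eye = np.eye(len(labels), dtype=bool)

    hardest_pos = np.where(same & ~eye, dist, -np.inf).max(axis=1)
    hardest_neg = np.where(~same, dist, np.inf).min(axis=1)

    losses = np.maximum(hardest_pos - hardest_neg + margin, 0.0)
    return float(losses.mean())

# Example: 6 embeddings of 3 simulated lunar objects (2 views each).
emb = np.random.randn(6, 128).astype(np.float32)
labels = np.array([0, 0, 1, 1, 2, 2])
print(batch_hard_triplet_loss(emb, labels))
```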

A Design and Analysis of Pressure Predictive Model for Oscillating Water Column Wave Energy Converters Based on Machine Learning (진동수주 파력발전장치를 위한 머신러닝 기반 압력 예측모델 설계 및 분석)

  • Seo, Dong-Woo; Huh, Taesang; Kim, Myungil; Oh, Jae-Won; Cho, Su-Gil
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.11 / pp.672-682 / 2020
  • Nowadays, research on digital twin technology for the efficient operation of various industrial and manufacturing sites is being actively conducted, and the gradual depletion of fossil fuels and environmental pollution issues call for new renewable, eco-friendly power generation methods such as wave power plants. In wave power generation, which produces electricity from the energy of waves, it is very important to understand and predict the amount of power generated and operational factors such as breakdowns, because these are closely tied to highly variable wave energy. Therefore, it is first necessary to derive meaningful correlations between highly volatile data, such as wave height data and the sensor data in the oscillating water column (OWC) chamber. Second, a methodological study that learns to predict the desired information from the extracted data, based on the derived correlations, should be conducted. This study designed a workflow-based training model using a machine learning framework to predict the pressure of the OWC. In addition, the validity of the pressure prediction analysis was verified on verification and evaluation datasets built from IoT sensor data, to enable smart operation and maintenance with a digital twin of the wave generation system.
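
The abstract describes a workflow that correlates wave-height and OWC chamber sensor data and then trains a model to predict chamber pressure. The machine learning framework used is not named, so the sketch below uses scikit-learn's gradient boosting regressor purely as a placeholder, with synthetic data and assumed lag features.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for time-aligned wave-height and chamber sensor series.
rng = np.random.default_rng(0)
n = 2000
wave_height = rng.gamma(2.0, 0.5, n)                   # highly variable input
airflow = 0.8 * wave_height + rng.normal(0, 0.1, n)    # correlated chamber sensor
pressure = 3.0 * wave_height + 1.5 * airflow + rng.normal(0, 0.2, n)

def lagged(x: np.ndarray, lags=(1, 2, 3)) -> np.ndarray:
    """Simple lag features: pressure often depends on the recent wave history."""
    return np.column_stack([np.roll(x, k) for k in lags])

X = np.column_stack([wave_height, airflow, lagged(wave_height)])[3:]
y = pressure[3:]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```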

Construction of a Bark Dataset for Automatic Tree Identification and Developing a Convolutional Neural Network-based Tree Species Identification Model (수목 동정을 위한 수피 분류 데이터셋 구축과 합성곱 신경망 기반 53개 수종의 동정 모델 개발)

  • Kim, Tae Kyung; Baek, Gyu Heon; Kim, Hyun Seok
    • Journal of Korean Society of Forest Science / v.110 no.2 / pp.155-164 / 2021
  • Many studies have been conducted on developing automatic plant identification algorithms that apply machine learning to various plant features, such as leaves and flowers. Unlike other plant characteristics, bark changes little across seasons and is maintained for a long period. Nevertheless, bark has a complex appearance with large variation depending on the environment, and there is insufficient material available for training algorithms. Here, in addition to the previously published bark image dataset BarkNet v.1.0, additional bark images were collected, and a dataset covering 53 tree species that can be easily observed in Korea is presented. A convolutional neural network (CNN) was trained and tested on the dataset, and the factors that interfere with the model's performance were identified. VGG-16 and VGG-19 were used as the CNN architectures; VGG-16 achieved 90.41% accuracy and VGG-19 achieved 92.62%. When tested on new tree images that are not in the original dataset but belong to the same genus or family, more than 80% of cases were correctly identified at the genus or family level. Meanwhile, the model tended to misclassify when distracting features, including leaves, mosses, and knots, were present in the image. For these cases, we propose that random cropping and classification by majority vote are valid ways to reduce errors in training and inference.
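
The random-crop, majority-vote inference strategy the authors propose can be sketched directly. The snippet below uses a VGG-16 backbone with a 53-way head, as in the paper, but the crop size, number of crops, and untrained weights are assumptions for illustration only.

```python
import torch
from torchvision import models, transforms
from PIL import Image

NUM_SPECIES = 53  # number of tree species in the paper's dataset

# VGG-16 backbone with a 53-way classification head (weights untrained here).
model = models.vgg16(weights=None)
model.classifier[6] = torch.nn.Linear(4096, NUM_SPECIES)
model.eval()

# Random crop followed by tensor conversion (crop size assumed).
crop = transforms.Compose([
    transforms.RandomCrop(224, pad_if_needed=True),
    transforms.ToTensor(),
])

@torch.no_grad()
def predict_by_majority_vote(img: Image.Image, n_crops: int = 10) -> int:
    """Classify each random crop independently, then take the most common label."""
    batch = torch.stack([crop(img) for _ in range(n_crops)])
    votes = model(batch).argmax(dim=1)
    return int(votes.mode().values)

# Usage: species_id = predict_by_majority_vote(Image.open("bark.jpg").convert("RGB"))
```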

Towards Real Time Detection of Rice Weed in Uncontrolled Crop Conditions (통제되지 않는 농작물 조건에서 쌀 잡초의 실시간 검출에 관한 연구)

  • Umraiz, Muhammad; Kim, Sang-cheol
    • Journal of Internet of Things and Convergence / v.6 no.1 / pp.83-95 / 2020
  • Precisely detecting weeds in a practical crop field environment is a dense and complex task, and previous approaches fall short in processing image frames both quickly and accurately. Much attention has been given to classifying plant diseases, while the crop weed detection problem has received comparatively little attention. Previous approaches report fast algorithms, but their inference times are far from real time, making them impractical for use in uncontrolled conditions. We therefore propose a detection model for the complex rice weed detection task. Experimental results show that the inference time of our approach is reduced by a significant margin, making it practically deployable in real conditions. The samples were collected at two different growth stages of rice and annotated manually.
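
Since the abstract's central claim concerns inference speed, the relevant check is frames per second on a stream of field images. The paper's own detector is not reproduced here; the sketch below times a lightweight off-the-shelf torchvision detector as a stand-in to show how such a real-time measurement can be made.

```python
import time
import torch
from torchvision.models.detection import fasterrcnn_mobilenet_v3_large_fpn

# Lightweight detector used only as a stand-in for the paper's weed detector.
model = fasterrcnn_mobilenet_v3_large_fpn(weights=None, weights_backbone=None,
                                          num_classes=2)  # weed vs. background (assumed)
model.eval()

@torch.no_grad()
def measure_fps(n_frames: int = 20, size=(3, 512, 512)) -> float:
    """Run the detector on synthetic frames and report frames per second."""
    frames = [torch.rand(size) for _ in range(n_frames)]
    start = time.perf_counter()
    for frame in frames:
        model([frame])  # torchvision detection models take a list of image tensors
    return n_frames / (time.perf_counter() - start)

print(f"approx. {measure_fps():.1f} FPS on CPU")
```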

A Personal Video Event Classification Method based on Multi-Modalities by DNN-Learning (DNN 학습을 이용한 퍼스널 비디오 시퀀스의 멀티 모달 기반 이벤트 분류 방법)

  • Lee, Yu Jin; Nang, Jongho
    • Journal of KIISE / v.43 no.11 / pp.1281-1297 / 2016
  • In recent years, personal videos have grown tremendously due to the substantial increase in the use of smart devices and networking services, which let users create and share video content easily and with few restrictions. Because videos generally contain multiple modalities and the frame data varies over time, taking both into account can significantly improve event detection performance. This paper proposes an event detection method in which high-level features are first extracted from the multiple modalities in a video, the features are rearranged in time sequence, and the association between the modalities is then learned with a DNN to produce a personal video event detector. In the proposed method, audio and image data are first synchronized and extracted, then fed into GoogLeNet and a Multi-Layer Perceptron (MLP) to extract high-level features. The resulting features are re-arranged in time sequence, and each video is reduced to a single feature vector for training the DNN.
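
The described flow, GoogLeNet features for the image stream, MLP features for the synchronized audio stream, time-ordered fusion, and a DNN event classifier, can be sketched as follows. Segment count, feature sizes, and the audio front end are assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn
from torchvision import models

N_SEGMENTS, N_EVENTS, AUDIO_DIM = 8, 10, 128   # assumed sizes

# Image branch: GoogLeNet trunk with its classifier removed (1024-d features).
googlenet = models.googlenet(weights=None, aux_logits=False, init_weights=True)
googlenet.fc = nn.Identity()

# Audio branch: small MLP over per-segment audio descriptors.
audio_mlp = nn.Sequential(nn.Linear(AUDIO_DIM, 256), nn.ReLU(), nn.Linear(256, 256))

# Event classifier over the time-ordered, concatenated segment features.
classifier = nn.Sequential(
    nn.Linear(N_SEGMENTS * (1024 + 256), 512), nn.ReLU(), nn.Linear(512, N_EVENTS)
)

def classify_video(frames: torch.Tensor, audio: torch.Tensor) -> torch.Tensor:
    """frames: (T, 3, 224, 224), one frame per segment; audio: (T, AUDIO_DIM)."""
    img_feat = googlenet(frames)                                # (T, 1024)
    aud_feat = audio_mlp(audio)                                 # (T, 256)
    fused = torch.cat([img_feat, aud_feat], dim=1).flatten()    # keeps time order
    return classifier(fused)

# Dummy video: 8 synchronized audio/image segments.
logits = classify_video(torch.rand(N_SEGMENTS, 3, 224, 224), torch.rand(N_SEGMENTS, AUDIO_DIM))
event = int(logits.argmax())
```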

User Satisfaction Analysis on Similarity-based Inference Insect Search Method in u-Learning Insect Observation using Smart Phone (스마트폰을 이용한 유러닝 곤충관찰학습에 있어서 유사곤충 추론검색기법의 사용자 만족도 분석)

  • Jun, Eung Sup
    • Journal of the Korea Society of Computer and Information / v.19 no.1 / pp.203-213 / 2014
  • In the ecological environment, non-experts need insect search systems to identify insect species and to obtain u-Learning contents related to the insects. To assist such non-expert members of the public, the ISBC (Insect Search by Biological Classification) method, which searches for insects based on biological classification, and the ISOBC (Insect Search by Observation based on Biological Classification) method, which infers the identity of an observed insect through observation organized by biological classification, have been provided. In this study, we proposed a new model with the ISOIA (Insect Search by Observation based on Insect Appearance) method, which is based on observation of insect appearance, to improve user satisfaction, and compared it with the ISBC and ISOBC methods. To evaluate these three insect search systems with the AHP method, we derived three evaluation criteria for user satisfaction and three sub-criteria for each criterion. The test results showed that the order of priorities was ISOIA, ISOBC, and ISBC, indicating that the proposed ISOIA system is superior in usability and quality to the previous insect search systems.
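
AHP (Analytic Hierarchy Process) derives priority weights from pairwise comparison judgments, typically as the normalized principal eigenvector of the comparison matrix. The sketch below shows that calculation for the three alternatives (ISOIA, ISOBC, ISBC) under one criterion; the comparison values are illustrative only, not the paper's survey data.

```python
import numpy as np

def ahp_priorities(pairwise: np.ndarray) -> np.ndarray:
    """Priority weights = normalized principal eigenvector of a reciprocal
    pairwise-comparison matrix (standard AHP)."""
    eigvals, eigvecs = np.linalg.eig(pairwise)
    principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
    return principal / principal.sum()

# Illustrative pairwise judgments among ISOIA, ISOBC, ISBC under one criterion
# (a value of 3 means the row alternative is moderately preferred to the column one).
judgments = np.array([
    [1.0, 3.0, 5.0],   # ISOIA vs. (ISOIA, ISOBC, ISBC)
    [1/3, 1.0, 3.0],   # ISOBC
    [1/5, 1/3, 1.0],   # ISBC
])

for name, w in zip(["ISOIA", "ISOBC", "ISBC"], ahp_priorities(judgments)):
    print(f"{name}: {w:.3f}")
```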

RTE System based on CBT for Effective Office SW Education (효과적인 오피스 SW 교육을 위한 CBT 기반의 RTE(Real Training Environment)시스템)

  • Kim, Seongyeol; Hong, Byeongdu
    • Journal of Korea Multimedia Society / v.16 no.3 / pp.375-387 / 2013
  • Advanced internet services and smart devices have created an environment that supports various forms of online learning anytime and anywhere, which requires learning content optimized for new media. Among the various online and offline IT education offerings, most concern office SW. Many of them fail to provide effective practical training because instructors tend to focus on teaching simple functions and repeatedly use formulaic examples. In this paper, we propose a new office SW education system that makes use of an LET (Live EduTainer) based on an RTE (Real Training Environment), which maximizes the learning effect, and integrates it with GBL (Game Based Learning), which fosters interest in the material rather than relying on simple instruction, so that learners become absorbed in it. We elaborate the teaching and learning method required for this system and describe its design and configuration.