• Title/Summary/Keyword: 증강학습 (augmented learning)

354 search results

Implementation of a Mobile App for Companion Dog Training using AR and Hand Tracking (AR 및 Hand Tracking을 활용한 반려견 훈련 모바일 앱 구현)

  • Chul-Ho Choi;Sung-Wook Park;Se-Hoon Jung;Chun-Bo Sim
    • The Journal of the Korea institute of electronic communication sciences / v.18 no.5 / pp.927-934 / 2023
  • With the recent growth of the companion animal market, various social issues related to companion animals have come to the forefront, including dog-bite incidents, the management of abandoned animals, euthanasia, and animal abuse. As potential solutions, a variety of training resources such as companion-animal broadcasts and educational apps are on offer, but these are of limited help to novice caretakers who are unsure what to prioritize in training. While easily accessible training apps are widely available, apps that let users directly engage in training and learn through hands-on experience remain scarce. In this paper, we propose a more efficient AR-based mobile app for companion animal training, built with the Unity engine. Usability evaluations indicated increased user engagement owing to elements absent from existing apps, and training immersion was enhanced, leading to improved learning outcomes. With further development, verification, and production, we anticipate that this app could become an effective training tool both for novice caretakers planning to adopt a companion animal and for experienced caretakers.

Image-Based Skin Cancer Classification System Using Attention Layer (Attention layer를 활용한 이미지 기반 피부암 분류 시스템)

  • GyuWon Lee;SungHee Woo
    • Journal of Practical Engineering Education / v.16 no.1_spc / pp.59-64 / 2024
  • As the aging population grows, the incidence of cancer is increasing. Skin cancer appears externally, but people often fail to notice it or simply overlook it; if the window for early detection is missed, the survival rate for late-stage cases is only 7.5-11%. Diagnosing serious skin cancer, however, requires considerable time and money for detailed examinations and cell tests, rather than simple visual inspection. To overcome these challenges, we propose an Attention-based CNN skin cancer classification system. If skin cancer can be detected early it can be treated quickly, and the proposed system can greatly assist the work of a specialist. To mitigate the imbalance of image data across skin cancer types, the classification model applies an oversampling technique to data with a highly skewed distribution ratio and adds an Attention layer to a pre-trained model, which is then compared against the same model without the Attention layer. We also plan to address the data imbalance problem further by strengthening data augmentation for specific classes.
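Of the ingredients named in this abstract, the oversampling step is the easiest to show in isolation. Below is a minimal PyTorch sketch of class-balanced sampling; the fake image tensors, class counts, and batch size are illustrative assumptions, not the paper's dataset or implementation.

```python
# Minimal sketch: class-balanced mini-batches via oversampling with PyTorch's
# WeightedRandomSampler. The stand-in data and class counts are assumptions.
from collections import Counter

import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

# Stand-in for an imbalanced skin-lesion dataset: 500 samples of class 0,
# 60 of class 1, 20 of class 2 (hypothetical counts).
labels = torch.tensor([0] * 500 + [1] * 60 + [2] * 20)
images = torch.randn(len(labels), 3, 64, 64)
dataset = TensorDataset(images, labels)

# Weight each sample by the inverse frequency of its class so that minority
# classes are drawn roughly as often as the majority class.
class_counts = Counter(labels.tolist())
sample_weights = [1.0 / class_counts[int(y)] for y in labels]

sampler = WeightedRandomSampler(weights=sample_weights,
                                num_samples=len(sample_weights),
                                replacement=True)
loader = DataLoader(dataset, batch_size=32, sampler=sampler)

# Each mini-batch drawn from `loader` is now roughly class-balanced.
x, y = next(iter(loader))
print(x.shape, torch.bincount(y, minlength=3))
```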

Analysis of research trends in the healthcare field utilizing extended-reality-based converged contents (확장현실 기반 융복합 콘텐츠를 활용한 보건의료 분야의 연구 동향 분석)

  • Ji-Eun Im;Ju-Hee Lee;Soon-Ryun Lim;Won-Jae Lee
    • Journal of Korean society of Dental Hygiene / v.24 no.3 / pp.197-208 / 2024
  • Objectives: Extended reality technology offers an innovative opportunity to enhance self-directed learning through interactive processes in healthcare education. This study aims to analyze research trends in the application of extended reality technology-based educational content in the field of healthcare. Methods: Through literature search, selection, and exclusion processes based on predefined criteria, we conducted an analysis of domestic healthcare studies employing extended reality technology, ultimately selecting and examining 39 relevant publications. Results: The analysis reveals diverse applications of extended reality across various fields such as medicine, dentistry, and nursing. Positive effects, including increased academic satisfaction, immersion, and interest, are observed, alongside challenges like media usage difficulties and cybersickness. Conclusions: Future research should focus on the development and application of extended reality-based educational content in diverse healthcare curricula, emphasizing both educational approaches and continuous technological advancements.

Research on Mining Technology for Explainable Decision Making (설명가능한 의사결정을 위한 마이닝 기술)

  • Kyungyong Chung
    • Journal of the Institute of Convergence Signal Processing / v.24 no.4 / pp.186-191 / 2023
  • Data processing techniques, including the handling of missing and outlier data and the construction of prediction and recommendation models, play a critical role in decision-making. This requires a clear account of the validity, reliability, and accuracy of all processes and results. It is also necessary to address data problems through explainable models such as decision trees and inference methods, and to pursue model lightweighting by considering various types of learning. The multi-layer mining classification method applying the sixth principle discovers, after data preprocessing, multidimensional relationships between variables and attributes that occur frequently in transactions. The paper explains how to discover significant relationships by mining transactions and how to model the data through regression analysis. It develops scalable models and logistic regression models, and proposes mining techniques that generate class labels through data cleansing, relevance analysis, data transformation, and data augmentation to support explainable decision-making.
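The pipeline shape the abstract describes (data cleansing, relevance analysis, data transformation, and a logistic regression model that generates class labels) can be illustrated in its generic form with scikit-learn. The synthetic data and the particular steps below are assumptions chosen for illustration, not the paper's mining technique.

```python
# Minimal sketch of the described pipeline shape: cleanse data, select relevant
# features, transform them, and fit a logistic regression that assigns class
# labels. The synthetic data and chosen steps are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipeline = Pipeline([
    ("impute", SimpleImputer(strategy="median")),   # data cleansing
    ("select", SelectKBest(f_classif, k=5)),        # relevance analysis
    ("scale", StandardScaler()),                    # data transformation
    ("model", LogisticRegression(max_iter=1000)),   # class-label generation
])

pipeline.fit(X_train, y_train)
print("held-out accuracy:", pipeline.score(X_test, y_test))

# The fitted coefficients give one simple form of explainability: each selected
# feature's weight indicates how strongly it pushes the decision.
print("coefficients:", pipeline.named_steps["model"].coef_)
```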

Monte Carlo Simulation based Optimal Aiming Point Computation Against Multiple Soft Targets on Ground (몬테칼로 시뮬레이션 기반의 다수 지상 연성표적에 대한 최적 조준점 산출)

  • Kim, Jong-Hwan;Ahn, Nam-Su
    • Journal of the Korea Society for Simulation / v.29 no.1 / pp.47-55 / 2020
  • This paper presents real-time autonomous computation of shot counts and aiming points against multiple soft targets on the ground by applying unsupervised learning (k-means clustering) and Monte Carlo simulation. For this computation, a 100 m × 200 m virtual battlefield is created in which an augmented enemy infantry platoon attacks, defends, and scatters, and a virtual weapon with a lethal range of 15 m is modeled. To determine the damage state of each enemy element (no damage, light wound, heavy wound, or death), Monte Carlo simulation is performed using the Carlton damage function to estimate the damage effect on the soft targets. To achieve the damage effectiveness on the enemy units intended by the commander, the optimal shot counts and aiming point locations are then calculated in less than 0.4 seconds by applying k-means clustering and repeated Monte Carlo simulation. It is hoped that this study will help develop a system that reduces decision time in the 'detection-decision-shoot' process for battalion-scale combat units operating the Dronebot combat system.
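The clustering step at the core of this computation, grouping scattered soft-target coordinates into k aiming points, can be sketched with scikit-learn's KMeans. The coordinates, the choice of k, and the lethal-range check below are illustrative assumptions, not the paper's simulation.

```python
# Minimal sketch: cluster scattered ground-target coordinates into aiming points
# with k-means, then count how many targets fall inside a 15 m lethal range.
# The coordinates, number of shots k, and coverage check are assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical scattered infantry positions on a 100 m x 200 m battlefield.
targets = rng.uniform(low=[0, 0], high=[100, 200], size=(40, 2))

k = 4  # assumed number of shots
kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(targets)
aim_points = kmeans.cluster_centers_

LETHAL_RANGE_M = 15.0
for i, aim in enumerate(aim_points):
    distances = np.linalg.norm(targets - aim, axis=1)
    covered = int(np.sum(distances <= LETHAL_RANGE_M))
    print(f"aim point {i}: ({aim[0]:.1f}, {aim[1]:.1f}) covers {covered} targets")
```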

DECODE: A Novel Method of DEep CNN-based Object DEtection using Chirps Emission and Echo Signals in Indoor Environment (실내 환경에서 Chirp Emission과 Echo Signal을 이용한 심층신경망 기반 객체 감지 기법)

  • Nam, Hyunsoo;Jeong, Jongpil
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.21 no.3 / pp.59-66 / 2021
  • Of the five senses (sight, hearing, smell, touch, and taste), humans mainly recognize surrounding objects using visual and auditory information, yet most recent object recognition research focuses on analysis of image sensor data. In this paper, various chirp audio signals are emitted into the observation space, the echoes are collected through a 2-channel receiving sensor and converted into spectral images, and an object recognition experiment in 3D space is conducted using a deep-learning-based image learning algorithm. The experiment was carried out under the noise and reverberation of a typical indoor environment rather than the ideal conditions of an anechoic room, and object recognition from the echoes estimated the position of the object with 83% accuracy. In addition, by mapping the inference result onto the observation space and a 3D spatial sound signal and outputting it as sound, visual information could be conveyed through audio. This suggests that object recognition research should exploit various kinds of echo information alongside image information, and that the technology could be applied to augmented reality through 3D sound.
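The front end of the described pipeline, converting a received echo into a spectral image suitable for a CNN, can be sketched with NumPy and SciPy. The chirp band, sampling rate, and synthetic echo below are assumptions for illustration only.

```python
# Minimal sketch: synthesize a chirp, fake a delayed noisy echo, and convert it
# into a spectrogram image of the kind a CNN classifier could consume.
# The chirp band, sampling rate, and delay are illustrative assumptions.
import numpy as np
from scipy.signal import chirp, spectrogram

FS = 44_100          # sampling rate in Hz (assumed)
DURATION_S = 0.05    # 50 ms chirp

t = np.linspace(0, DURATION_S, int(FS * DURATION_S), endpoint=False)
emitted = chirp(t, f0=4_000, f1=16_000, t1=DURATION_S, method="linear")

# Hypothetical echo: the chirp returns 10 ms later, attenuated, with room noise.
delay_samples = int(0.010 * FS)
echo = np.zeros(delay_samples + emitted.size)
echo[delay_samples:] = 0.3 * emitted
echo += 0.02 * np.random.default_rng(0).standard_normal(echo.size)

# Short-time Fourier analysis turns the 1-D echo into a 2-D time-frequency image.
freqs, times, power = spectrogram(echo, fs=FS, nperseg=256, noverlap=192)
image = 10 * np.log10(power + 1e-12)   # log scale, as usual for spectrograms

print("spectrogram image shape (freq bins x time frames):", image.shape)
```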

Development of Fender Segmentation System for Port Structures using Vision Sensor and Deep Learning (비전센서 및 딥러닝을 이용한 항만구조물 방충설비 세분화 시스템 개발)

  • Min, Jiyoung;Yu, Byeongjun;Kim, Jonghyeok;Jeon, Haemin
    • Journal of the Korea institute for structural maintenance and inspection / v.26 no.2 / pp.28-36 / 2022
  • As port structures are exposed to extreme external loads such as wind (typhoons), sea waves, and collisions with ships, it is important to evaluate their structural safety periodically. To monitor port structures, especially rubber fenders, this study proposes a fender segmentation system using a vision sensor and deep learning. For fender segmentation, a new deep learning network is proposed that improves the encoder-decoder framework by integrating, in a DenseNet-style layout, a receptive field block convolution module inspired by the eccentricity of receptive fields in the human visual system. To train the network, images of various fender types such as BP, V, cell, cylindrical, and tire fenders were collected and augmented with four methods: elastic distortion, horizontal flip, color jitter, and affine transforms. The proposed algorithm was trained and verified on the collected fender images, and the results showed that it segments precisely in real time with a higher IoU (84%) and F1 score (90%) than the conventional segmentation model, VGG16 with U-Net. The trained network was also applied to real images taken at a port in the Republic of Korea, where the fenders were segmented with high accuracy even with a small dataset.
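The four augmentation methods named in the abstract map directly onto transforms available in the albumentations library. The sketch below uses that library with made-up probabilities and magnitudes, so it illustrates the idea rather than reproducing the authors' pipeline.

```python
# Minimal sketch: the four augmentations named in the abstract, composed with
# the albumentations library. Probabilities and magnitudes are assumptions.
import albumentations as A
import numpy as np

augment = A.Compose([
    A.ElasticTransform(alpha=1.0, sigma=50, p=0.5),      # elastic distortion
    A.HorizontalFlip(p=0.5),                             # horizontal flip
    A.ColorJitter(brightness=0.2, contrast=0.2,
                  saturation=0.2, hue=0.1, p=0.5),       # color jitter
    A.Affine(scale=(0.9, 1.1), rotate=(-10, 10),
             translate_percent=(0.0, 0.05), p=0.5),      # affine transform
])

# A random stand-in for a fender photograph and its segmentation mask.
image = np.random.randint(0, 256, size=(256, 256, 3), dtype=np.uint8)
mask = np.zeros((256, 256), dtype=np.uint8)

# Passing the mask together with the image keeps the two spatially aligned
# under the geometric transforms.
augmented = augment(image=image, mask=mask)
print(augmented["image"].shape, augmented["mask"].shape)
```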

A Thoracic Spine Segmentation Technique for Automatic Extraction of VHS and Cobb Angle from X-ray Images (X-ray 영상에서 VHS와 콥 각도 자동 추출을 위한 흉추 분할 기법)

  • Ye-Eun Lee;Seung-Hwa Han;Dong-Gyu Lee;Ho-Joon Kim
    • KIPS Transactions on Software and Data Engineering / v.12 no.1 / pp.51-58 / 2023
  • In this paper, we propose an organ segmentation technique for the automatic extraction of medical diagnostic indicators from X-ray images. To calculate diagnostic indicators of heart disease and spinal disease such as VHS (vertebral heart scale) and the Cobb angle, the thoracic spine, carina, and heart must be accurately segmented in a chest X-ray image. We adopt a deep neural network in which a high-resolution representation of the image at each layer is connected in parallel with branches converted to low-resolution feature maps, a structure that allows relative position information in the image to be reflected effectively in the segmentation process. It is shown that learning performance can be improved by combining an OCR module, in which pixel information and object information interact over multiple steps, with a channel attention module, which allows each channel of the network to be weighted differently. In addition, a method of augmenting the training data is presented to provide robust performance against changes in the position, shape, and size of the subject in the X-ray image. The effectiveness of the proposed method was evaluated in an experiment using 145 human chest X-ray images and 118 animal X-ray images.
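The channel attention module described here, which lets each channel of the network carry a different weight, is commonly realized as a squeeze-and-excitation block. The PyTorch sketch below shows that generic form; it is not the paper's exact module.

```python
# Minimal sketch of a channel attention (squeeze-and-excitation style) block:
# each channel of a feature map is re-weighted by a learned scalar.
# This is the generic form, not the exact module used in the paper.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # squeeze: global average per channel
        self.fc = nn.Sequential(              # excite: learn per-channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights                    # re-weight each channel


# Example: re-weight a batch of 64-channel feature maps from a segmentation network.
features = torch.randn(2, 64, 32, 32)
print(ChannelAttention(64)(features).shape)   # torch.Size([2, 64, 32, 32])
```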

Classification of Diabetic Retinopathy using Mask R-CNN and Random Forest Method

  • Jung, Younghoon;Kim, Daewon
    • Journal of the Korea Society of Computer and Information / v.27 no.12 / pp.29-40 / 2022
  • In this paper, we study a system that detects and analyzes the pathological features of diabetic retinopathy and automatically diagnoses the disease using Mask R-CNN, a deep learning technique, together with a Random Forest classifier. Diabetic retinopathy is diagnosed from fundus images taken with specialized equipment, and the brightness, color tone, and contrast of these images vary from device to device, which motivates the research and development of an AI-based automatic diagnosis system to support ophthalmologists' medical judgments. The system detects pathological features such as microvascular perfusion and retinal hemorrhage using Mask R-CNN, and diagnoses normal versus abnormal conditions of the eye with a Random Forest classifier after pre-processing. To improve the detection performance of the Mask R-CNN algorithm, image augmentation was applied during training. Dice similarity coefficients and mean accuracy were used as evaluation metrics for detection accuracy, with Faster R-CNN as a control. The Mask R-CNN method achieved an average Dice coefficient of 90% and a mean accuracy of 91%. When diabetic retinopathy was diagnosed by training a Random Forest classifier on the detected pathological signs, the accuracy was 99%.
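Because detection quality is reported as a Dice similarity coefficient, a short NumPy sketch of that metric, 2|A∩B| / (|A| + |B|), may help. The two binary masks are synthetic and purely illustrative.

```python
# Minimal sketch: Dice similarity coefficient between a predicted lesion mask
# and a ground-truth mask, 2*|A intersect B| / (|A| + |B|). Masks are synthetic.
import numpy as np


def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)


# Two toy 8x8 masks that overlap on part of a lesion region.
truth = np.zeros((8, 8), dtype=np.uint8)
truth[2:6, 2:6] = 1
pred = np.zeros((8, 8), dtype=np.uint8)
pred[3:7, 3:7] = 1

print(f"Dice = {dice_coefficient(pred, truth):.3f}")   # 9 overlapping pixels of 16+16 -> ~0.56
```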

Personalized Speech Classification Scheme for the Smart Speaker Accessibility Improvement of the Speech-Impaired people (언어장애인의 스마트스피커 접근성 향상을 위한 개인화된 음성 분류 기법)

  • SeungKwon Lee;U-Jin Choe;Gwangil Jeon
    • Smart Media Journal / v.11 no.11 / pp.17-24 / 2022
  • With the spread of smart speakers based on voice recognition and deep learning, not only non-disabled people but also blind or physically disabled people can easily control home appliances such as lights and TVs by voice through linked home network services, greatly improving quality of life. For people with speech impairments, however, these useful smart speaker services are out of reach because articulation or speech disorders make their pronunciation difficult for the speaker to recognize. In this paper, we propose a personalized speech classification technique that lets speech-impaired users access some of the functions provided by a smart speaker. The goal is to raise the recognition rate and accuracy for sentences spoken by speech-impaired people even with a small amount of data and a short training time, so that the smart speaker's services can actually be used. Data augmentation and a one-cycle learning rate optimization technique were applied while fine-tuning a ResNet18 model. In an experiment in which each of 30 smart speaker commands was recorded 10 times and the model was trained within 3 minutes, the speech classification recognition rate was about 95.2%.
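The training recipe described, fine-tuning ResNet18 with a one-cycle learning rate schedule, maps onto standard PyTorch components. In the sketch below the dataset is a random stand-in, and the class count, learning rates, and epoch count are assumptions rather than the paper's settings.

```python
# Minimal sketch: fine-tune a pretrained ResNet18 with a one-cycle learning rate
# schedule, as the abstract describes. The stand-in data, class count
# (30 commands), and all hyperparameters are illustrative assumptions.
# Requires a recent torchvision (>= 0.13) for the weights API.
import torch
import torch.nn as nn
from torch.optim.lr_scheduler import OneCycleLR
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

NUM_COMMANDS = 30
EPOCHS = 2

# Stand-in dataset: 64 fake 3x224x224 "speech images" (e.g. spectrograms).
images = torch.randn(64, 3, 224, 224)
labels = torch.randint(0, NUM_COMMANDS, (64,))
loader = DataLoader(TensorDataset(images, labels), batch_size=8, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_COMMANDS)   # new head for 30 commands

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
scheduler = OneCycleLR(optimizer, max_lr=1e-2,
                       steps_per_epoch=len(loader), epochs=EPOCHS)

model.train()
for epoch in range(EPOCHS):
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
        scheduler.step()            # one-cycle schedule advances every batch
    print(f"epoch {epoch}: last batch loss {loss.item():.3f}")
```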