• Title/Summary/Keyword: Large Objects


Integration of Extended IFC-BIM and Ontology for Information Management of Bridge Inspection (확장 IFC-BIM 기반 정보모델과 온톨로지를 활용한 교량 점검데이터 관리방법)

  • Erdene, Khuvilai;Kwon, Tae Ho;Lee, Sang-Ho
    • Journal of the Computational Structural Engineering Institute of Korea / v.33 no.6 / pp.411-417 / 2020
  • To utilize building information modeling (BIM) technology at the bridge maintenance stage, it is necessary to integrate large quantities of bridge inspection and model data for object-oriented information management. This research aims to establish the benefits of utilizing the extended Industry Foundation Classes (IFC)-based BIM and an ontology for bridge inspection information management. The IFC entities were extended to represent bridge objects, and a method of generating the extended IFC-based information model was proposed. A bridge inspection ontology was also developed by extracting and classifying inspection concepts from the AASHTO standard. The classified concepts and their relationships were mapped to the ontology using the semantic-triples approach. Finally, the extended IFC-based BIM model was integrated with the ontology for bridge inspection data management. The effectiveness of the proposed framework was tested and verified by extracting bridge inspection data via SPARQL queries.

Design of YOLO-based Removable System for Pet Monitoring (반려동물 모니터링을 위한 YOLO 기반의 이동식 시스템 설계)

  • Lee, Min-Hye;Kang, Jun-Young;Lim, Soon-Ja
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.1 / pp.22-27 / 2020
  • Recently, as the number of households raising pets has grown with the rise of single-person households, there is a need for systems that monitor the status and behavior of pets. Monitoring pets with fixed home CCTV cameras has spatial limitations: it either requires a large number of cameras or restricts the pets' movement. In this paper, we propose a mobile system that detects and tracks cats using deep learning to overcome these spatial limitations. We use YOLO (You Only Look Once), an object detection neural network model, to learn the characteristics of pets, and deploy it on a Raspberry Pi to track objects detected in the video. We designed a mobile monitoring system that connects the Raspberry Pi to a laptop over wireless LAN and can check the movement and condition of cats in real time.
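
The tracking loop described above reduces to steering the platform so the detected animal stays near the frame center. The sketch below illustrates that step only; the frame width, dead-zone size, and function names are illustrative assumptions, not details from the paper.

```python
# Hypothetical steering step for a mobile pet-tracking platform: given a
# YOLO-style bounding box, decide which way to turn to keep the pet centered.
# FRAME_WIDTH and DEAD_ZONE are assumed values, not from the paper.

FRAME_WIDTH = 640   # assumed camera resolution width in pixels
DEAD_ZONE = 40      # tolerance around the frame center before turning

def steer_command(bbox):
    """bbox = (x, y, w, h) in pixels; return 'left', 'right', or 'hold'."""
    x, y, w, h = bbox
    box_center = x + w / 2
    offset = box_center - FRAME_WIDTH / 2
    if offset < -DEAD_ZONE:
        return "left"
    if offset > DEAD_ZONE:
        return "right"
    return "hold"

print(steer_command((100, 80, 120, 90)))  # box centered at x=160 -> "left"
```

On the real system this decision would be re-evaluated per frame, with the command driving the platform's motors.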

In-camera VFX implementation study using short-throw projector (focused on low-cost solution)

  • Li, Penghui;Kim, Ki-Hong;Lee, David-Junesok
    • International Journal of Internet, Broadcasting and Communication / v.14 no.2 / pp.10-16 / 2022
  • As an important part of virtual production, In-camera VFX is the process of shooting real objects against virtual three-dimensional backgrounds in real time, using computer graphics and display technology to obtain the final footage in camera. Currently only two types of medium are used for background imaging in the In-camera VFX process: the LED wall and the chroma key screen. LED-wall-based In-camera VFX realizes background imaging through LED display technology; although the imaging quality is high, the cost of the LED wall raises the cost of virtual production. Chroma-key-based In-camera VFX realizes the background through real-time keying; although the price is low, limitations of real-time keying technology and lighting conditions reduce the usability of the final picture. Short-throw projection can compress the projection distance to within 1 meter while producing a relatively large picture, which removes the traditional requirement of leaving considerable space between screen and projector, and it is relatively cheap compared to an LED wall. Short-throw projection is therefore a candidate for projecting backgrounds in the In-camera VFX process. This paper analyzes the principle of short-throw projection technology and existing In-camera VFX solutions and, through comparative experiments, proposes a low-cost solution that uses short-throw projectors to project virtual backgrounds and realize the In-camera VFX process.

Transfer learning in a deep convolutional neural network for implant fixture classification: A pilot study

  • Kim, Hak-Sun;Ha, Eun-Gyu;Kim, Young Hyun;Jeon, Kug Jin;Lee, Chena;Han, Sang-Sun
    • Imaging Science in Dentistry / v.52 no.2 / pp.219-224 / 2022
  • Purpose: This study aimed to evaluate the performance of transfer learning in a deep convolutional neural network for classifying implant fixtures. Materials and Methods: Periapical radiographs of implant fixtures obtained using the Superline (Dentium Co. Ltd., Seoul, Korea), TS III (Osstem Implant Co. Ltd., Seoul, Korea), and Bone Level Implant (Institut Straumann AG, Basel, Switzerland) systems were selected from patients who underwent dental implant treatment. All 355 implant fixtures comprised the total dataset and were annotated with the name of the system. The total dataset was split into a training dataset and a test dataset at a ratio of 8 to 2. YOLOv3 (You Only Look Once version 3, available at https://pjreddie.com/darknet/yolo/), a deep convolutional neural network pretrained on a large image dataset of objects, was used to train the model to classify fixtures in periapical images, in a process called transfer learning. The network was trained on the training dataset for 100, 200, and 300 epochs. Using the test dataset, the performance of the network was evaluated in terms of sensitivity, specificity, and accuracy. Results: When YOLOv3 was trained for 200 epochs, the sensitivity, specificity, accuracy, and confidence score were the highest for all systems, with overall results of 94.4%, 97.9%, 96.7%, and 0.75, respectively. The network performed best in classifying Bone Level Implant fixtures, with 100.0% sensitivity, specificity, and accuracy. Conclusion: Through transfer learning, high performance could be achieved with YOLOv3, even with a small amount of data.
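
The sensitivity, specificity, and accuracy reported above are standard one-vs-rest counts over a confusion matrix. A minimal sketch of how they can be computed for one class in a multi-class setting (the label names below are illustrative, not the study's data):

```python
# Per-class sensitivity, specificity, and accuracy from one-vs-rest counts.

def class_metrics(y_true, y_pred, cls):
    """Return (sensitivity, specificity, accuracy) for one class."""
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    tn = sum(t != cls and p != cls for t, p in zip(y_true, y_pred))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    sensitivity = tp / (tp + fn)            # true positive rate
    specificity = tn / (tn + fp)            # true negative rate
    accuracy = (tp + tn) / len(y_true)      # fraction labeled correctly
    return sensitivity, specificity, accuracy

# Illustrative toy labels, not the study's dataset:
y_true = ["Superline", "TSIII", "BoneLevel", "Superline"]
y_pred = ["Superline", "TSIII", "Superline", "Superline"]
print(class_metrics(y_true, y_pred, "Superline"))  # (1.0, 0.5, 0.75)
```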

A study on the Generation Method of Aircraft Wing Flexure Data Using Generative Adversarial Networks (생성적 적대 신경망을 이용한 항공기 날개 플렉셔 데이터 생성 방안에 관한 연구)

  • Ryu, Kyung-Don
    • Journal of Advanced Navigation Technology / v.26 no.3 / pp.179-184 / 2022
  • An accurate wing flexure model is required to improve the transfer alignment performance of guided weapon systems mounted on the wings of fighter aircraft or armed helicopters. Mechanical and stochastic modeling methods have been studied to solve this problem, but their accuracy is too low for application to weapon systems. The deep learning techniques studied recently are well suited to nonlinear modeling; however, operating fighter aircraft to secure the large amount of data needed for deep learning is practically difficult. In this paper, a generative adversarial network (GAN) was used to generate flexure data samples similar to the actual flexure data, and it was confirmed that the generated data resemble the actual data by utilizing measures of similarity, which quantify how alike two data objects are.
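
The abstract does not say which measure of similarity was used; as one common illustration (an assumption, not necessarily the author's choice), cosine similarity between a real and a generated flexure sequence can be computed as:

```python
# Cosine similarity between two equal-length sequences, as one example of a
# "measure of similarity" for comparing real and generated data.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

real_flexure = [0.10, 0.12, 0.15, 0.13]       # hypothetical measured deflections
generated_flexure = [0.11, 0.12, 0.14, 0.13]  # hypothetical GAN output

print(round(cosine_similarity(real_flexure, generated_flexure), 4))
```

A value near 1.0 indicates the generated sequence closely follows the shape of the real one.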

Image Augmentation of Paralichthys Olivaceus Disease Using SinGAN Deep Learning Model (SinGAN 딥러닝 모델을 이용한 넙치 질병 이미지 증강)

  • Son, Hyun Seung;Choi, Han Suk
    • The Journal of the Korea Contents Association / v.21 no.12 / pp.322-330 / 2021
  • In modern aquaculture, mass mortality is a critical issue that determines the success of an aquaculture business. If a fish disease is not detected at an early stage, it spreads quickly because the farm is a closed environment. Early detection of diseases is therefore crucial to preventing mass mortality of farmed fish. Deep learning-based automatic identification of fish diseases has recently come into wide use, but identification is hampered by the scarcity of fish disease images. This paper therefore suggests a method to generate a large number of fish disease images by synthesizing normal images and disease images with the SinGAN deep learning model. We generate images of the three most frequently occurring Paralichthys olivaceus diseases, Scuticociliatida, Vibriosis, and Lymphocytosis, and compare them with the original images. In this study, a total of 330 Scuticociliatida images, 110 Vibriosis images, and 110 Lymphocytosis images were created by synthesizing 10 disease patterns with 11 normal halibut images, and 1,320 images were produced by quadrupling them.

A Study on Sensor-Based Upper Full-Body Motion Tracking on HoloLens

  • Park, Sung-Jun
    • Journal of the Korea Society of Computer and Information / v.26 no.4 / pp.39-46 / 2021
  • In this paper, we propose a motion recognition method required at industrial sites in mixed reality. Industrial work (grasping, lifting, and carrying) involves the entire upper body, from trunk movements to arm movements. Instead of heavy motion capture equipment or vision-based devices such as Kinect, we use a setup composed of sensors and wearable devices: two IMU sensors for trunk and shoulder movement, and a Myo armband for arm movements. Real-time data from a total of four sources are fused to enable motion recognition over the entire upper body. In the experiment, the sensors were attached to actual clothing, and objects were manipulated through synchronization; the synchronized method showed no errors in either large or small movements. Finally, in the performance evaluation, the system averaged 50 frames per second for one-handed operation on the HoloLens and 60 frames per second for two-handed operation.

Efficient Forest Fire Detection using Rule-Based Multi-color Space and Correlation Coefficient for Application in Unmanned Aerial Vehicles

  • Anh, Nguyen Duc;Van Thanh, Pham;Lap, Doan Tu;Khai, Nguyen Tuan;Van An, Tran;Tan, Tran Duc;An, Nguyen Huu;Dinh, Dang Nhu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.2 / pp.381-404 / 2022
  • Forest fires inflict great losses of human life and serious damage to ecological systems. Hence, numerous fire detection methods have been proposed, one of which is sensor-based fire detection. However, these methods reveal several limitations when applied in large spaces such as forests: high cost, high false alarm rates, limited battery capacity, and other problems. In this research, we propose a novel forest fire detection method based on image processing and the correlation coefficient. First, two fire detection conditions are applied in the RGB color space to distinguish fire pixels from the background. Second, the image is converted from RGB to the YCbCr color space, where two further fire detection conditions are applied. Finally, the correlation coefficient is used to distinguish fires from objects with fire-like colors. The proposed algorithm is tested and evaluated on eleven fire and non-fire videos collected from the internet and achieves an F-score of up to 95.87% and an accuracy of 97.89%.
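
The pipeline above applies rule sets in two color spaces per pixel. A minimal sketch of that idea, assuming simple red-dominance and chroma rules and a standard BT.601 conversion; the paper's actual conditions and thresholds may differ.

```python
# Sketch of a rule-based fire-pixel test in RGB and YCbCr color spaces.
# The specific rules and the r_min threshold are illustrative assumptions.

def rgb_to_ycbcr(r, g, b):
    # ITU-R BT.601 full-range RGB -> YCbCr conversion
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_fire_pixel(r, g, b, r_min=150):
    # RGB rule: fire pixels are red-dominant and bright in the red channel.
    rule_rgb = r > g > b and r > r_min
    # YCbCr rule: chroma-red exceeds chroma-blue, luma exceeds chroma-blue.
    y, cb, cr = rgb_to_ycbcr(r, g, b)
    rule_ycbcr = cr > cb and y > cb
    return rule_rgb and rule_ycbcr

print(is_fire_pixel(230, 120, 40))  # bright orange pixel -> flagged
print(is_fire_pixel(60, 90, 200))   # sky-blue pixel -> not flagged
```

In the paper, pixels passing both rule sets would then be checked against the correlation-coefficient stage to reject fire-colored objects.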

A Study of Tram-Pedestrian Collision Prediction Method Using YOLOv5 and Motion Vector (YOLOv5와 모션벡터를 활용한 트램-보행자 충돌 예측 방법 연구)

  • Kim, Young-Min;An, Hyeon-Uk;Jeon, Hee-gyun;Kim, Jin-Pyeong;Jang, Gyu-Jin;Hwang, Hyeon-Chyeol
    • KIPS Transactions on Software and Data Engineering / v.10 no.12 / pp.561-568 / 2021
  • In recent years, autonomous driving has become a high-value-added technology attracting attention in science and industry. For smooth self-driving, it is necessary to accurately detect objects and estimate their movement speed in real time. CNN-based deep learning algorithms and conventional dense optical flow consume too much time, making real-time object detection and speed estimation difficult. In this paper, using a single camera image, fast object detection is performed with the YOLOv5 deep learning algorithm, and the object's speed is estimated quickly using a local dense optical flow, modified from conventional dense optical flow, computed only over the detected object region. Based on this algorithm, we present a system that predicts the collision time and probability, through which we intend to contribute to preventing tram accidents.
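
Once the detector gives a pedestrian's position and the optical flow gives a relative speed, the collision-time prediction reduces to a time-to-collision estimate. The formula below is a generic sketch of that final step, not the paper's exact model.

```python
# Generic time-to-collision (TTC): distance to the pedestrian divided by the
# closing speed (tram speed plus the pedestrian's speed toward the track).

def time_to_collision(distance_m, closing_speed_mps):
    """Return seconds until collision, or None if the gap is not closing."""
    if closing_speed_mps <= 0:
        return None
    return distance_m / closing_speed_mps

# Tram 25 m from a pedestrian, closing at 10 m/s:
print(time_to_collision(25.0, 10.0))  # 2.5 seconds
```

In a real system this TTC would be compared against braking distance and reaction time to trigger a warning.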

Immediate Effect of Neuromuscular Electrical Stimulation on Balance and Proprioception During One-leg Standing

  • Je, Jeongwoo;Choi, Woochol Joseph
    • Physical Therapy Korea / v.29 no.3 / pp.187-193 / 2022
  • Background: Neuromuscular electrical stimulation (NMES) is a physical modality used to activate skeletal muscles for strengthening. While voluntary muscle contraction (VMC) follows the progressive recruitment of motor units in order of size from small to large, NMES-induced muscle contraction occurs in a nonselective and synchronous pattern. Therefore, the outcome of muscle strengthening training using NMES-induced versus voluntary contraction might differ, which might affect balance performance. Objects: We examined how NMES training affected balance and proprioception. Methods: Forty-four young adults were randomly assigned to the NMES and VMC groups. All participants performed one-leg standing on a force plate and sat on the Biodex (Biodex R Corp.) to measure balance and ankle proprioception, respectively. All measures were conducted before and after a training session. In the NMES group, electrode pads were placed on the tibialis anterior, gastrocnemius, and soleus muscles for 20 minutes. In the VMC group, co-contraction of the three muscles was conducted. Outcome variables included the mean distance, root mean square distance, total excursion, mean velocity, and 95% confidence circle area acquired from the center of pressure data, and the absolute error of dorsi/plantarflexion. Results: None of the outcome variables was associated with group (p > 0.35). However, all variables except plantarflexion error were associated with time (p < 0.02); the area and mean velocity were 37.0% and 18.6% lower post-training than pre-training in the NMES group, respectively, and 48.9% and 16.7% lower in the VMC group. Conclusion: Despite the different physiology underlying NMES-induced contraction versus VMC, both training methods improved balance and ankle joint proprioception.
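
The balance outcomes above are standard center-of-pressure (COP) sway metrics. A minimal sketch of three of them, assuming a list of (x, y) COP samples and a fixed sampling interval (the sample data and units are illustrative, not the study's):

```python
# Mean distance, total excursion, and mean velocity from a COP trajectory.
import math

def cop_metrics(samples, dt):
    """samples: list of (x, y) positions; dt: sampling interval in seconds.
    Returns (mean_distance, total_excursion, mean_velocity)."""
    mx = sum(x for x, _ in samples) / len(samples)
    my = sum(y for _, y in samples) / len(samples)
    # Mean distance: average radial distance from the mean COP position.
    mean_distance = sum(math.hypot(x - mx, y - my) for x, y in samples) / len(samples)
    # Total excursion: length of the path traced by the COP.
    total_excursion = sum(
        math.hypot(x2 - x1, y2 - y1)
        for (x1, y1), (x2, y2) in zip(samples, samples[1:])
    )
    duration = dt * (len(samples) - 1)
    mean_velocity = total_excursion / duration
    return mean_distance, total_excursion, mean_velocity

path = [(0.0, 0.0), (0.3, 0.4), (0.6, 0.8), (0.3, 0.4)]  # hypothetical samples, cm
print(cop_metrics(path, dt=0.01))
```

Lower values of these metrics after training correspond to the reduced sway reported in the results.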