Title/Summary/Keyword: self-contained system


Motion Analysis of Light Buoys Combined with a 7 Nautical Mile Self-Contained Lantern

  • Son, Bo-Hun; Ko, Seok-Won; Yang, Jae-Hyoung; Jeong, Se-Min
    • Journal of the Korean Society of Marine Environment & Safety, v.24 no.5, pp.628-636, 2018
  • Because large buoys are mainly made of steel, they are heavy and vulnerable to corrosion by seawater, which makes installation and maintenance difficult. Moreover, collisions between vessels and buoys, and damage to vessels caused by the buoys' material (e.g., steel), are reported every year. Recently, light buoys adopting eco-friendly, lightweight materials have come into the spotlight as a way to solve these problems. In Korea, a new lightweight buoy with a 7-nautical-mile lantern, adopting expanded polypropylene (EPP) for the buoyant body and aluminum for the tower structure, was developed in 2017. When such light buoys operate in the ocean, the visibility and angle of the light from the installed lantern change, which may cause the buoys to function improperly. Research on the performance of these light buoys is therefore needed, since their weight distribution and motion characteristics differ from those of conventional models. In this study, stability estimation and motion analyses of the newly developed buoys under various environmental conditions, including a mooring line, were carried out using ANSYS AQWA. Numerical simulations for estimating wind and current loads were performed with the commercial CFD software Siemens STAR-CCM+ to increase the accuracy of the motion analysis. A comparison of the estimated maximum significant motions showed that waves and currents were more influential than wind on the motion of the buoys. The estimated motions also grew as the sea state worsened, likely because the peak frequencies of the wave spectra approached the natural frequencies of the buoys, as the spectral sketch below illustrates.
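The resonance mechanism suggested in the abstract can be illustrated with standard linear spectral analysis: the response spectrum is the wave spectrum scaled by the squared response amplitude operator (RAO), and the significant motion is twice the square root of its zeroth moment. The sketch below is a minimal Python illustration, not the paper's ANSYS AQWA model; the Pierson-Moskowitz spectrum, the single-degree-of-freedom RAO, and every numerical value (natural frequency, damping ratio, sea states) are assumptions chosen for illustration only.

```python
# Minimal, hypothetical sketch (not the paper's AQWA model): response
# spectrum = |RAO|^2 * wave spectrum, significant motion ~ 2*sqrt(m0).
import numpy as np

def pm_spectrum(omega, hs, tp):
    """Pierson-Moskowitz wave spectrum [m^2*s] for significant wave
    height hs [m] and peak period tp [s]."""
    wp = 2.0 * np.pi / tp
    return (5.0 / 16.0) * hs**2 * wp**4 / omega**5 * np.exp(-1.25 * (wp / omega) ** 4)

def rao(omega, omega_n=0.8, zeta=0.1):
    """Single-DOF motion RAO with an assumed natural frequency omega_n
    [rad/s] and damping ratio zeta."""
    r = omega / omega_n
    return 1.0 / np.sqrt((1.0 - r**2) ** 2 + (2.0 * zeta * r) ** 2)

omega = np.linspace(0.2, 4.0, 4000)                  # frequency grid [rad/s]
d_omega = omega[1] - omega[0]
for hs, tp in [(1.0, 5.0), (2.5, 7.0), (4.0, 9.0)]:  # worsening sea states
    s_resp = rao(omega) ** 2 * pm_spectrum(omega, hs, tp)
    m0 = np.sum(s_resp) * d_omega                    # zeroth spectral moment
    print(f"Hs={hs} m, Tp={tp} s -> significant motion ~ {2 * np.sqrt(m0):.2f} m")
```

With these assumed values the wave peak frequency falls from about 1.26 to 0.70 rad/s as the sea state worsens, moving toward the assumed natural frequency of 0.8 rad/s, so the computed significant motion grows, mirroring the resonance explanation in the abstract.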

Deep Learning-based Professional Image Interpretation Using Expertise Transplant

  • Kim, Taejin; Kim, Namgyu
    • Journal of Intelligence and Information Systems, v.26 no.2, pp.79-104, 2020
  • Recently, as deep learning has attracted attention, it is being considered as a method for solving problems in various fields. Deep learning is known to perform especially well on unstructured data such as text, sound, and images, and many studies have proven its effectiveness. Owing to the remarkable development of deep learning for text and images, interest in image captioning technology and its applications is rapidly increasing. Image captioning is a technique that automatically generates relevant captions for a given image by handling image comprehension and text generation simultaneously. Despite the high entry barrier of image captioning, which requires analysts to process both image and text data, it has established itself as one of the key fields in AI research owing to its wide applicability, and many studies have been conducted to improve its performance. Recent research attempts to create advanced captions that not only describe an image accurately but also convey the information contained in the image more sophisticatedly. Despite these efforts, it is difficult to find research that interprets images from the perspective of domain experts in each field rather than from the perspective of the general public. Even for the same image, the parts of interest may differ according to the professional field of the viewer, and the way of interpreting and expressing the image also differs with the level of expertise. The public tends to recognize an image from a holistic, general perspective, that is, by identifying the image's constituent objects and their relationships. Domain experts, by contrast, tend to focus on the specific elements needed to interpret the given image based on their expertise. This implies that the meaningful parts of an image differ depending on the viewer's perspective, and image captioning needs to reflect this phenomenon. Therefore, in this study, we propose a method to generate domain-specialized captions for an image by utilizing the expertise of experts in the corresponding domain. Specifically, after pre-training on a large amount of general data, the expertise of the field is transplanted through transfer learning with a small amount of expertise data. However, naive transfer learning with expertise data can introduce another problem: simultaneous learning with captions of various characteristics may cause so-called 'inter-observation interference', which makes it difficult to learn each characteristic point of view purely. When learning from a vast amount of data, most of this interference is self-purified and has little impact on the results; in fine-tuning on a small amount of data, however, its impact can be relatively large. To solve this problem, we propose a novel 'Character-Independent Transfer-learning' that performs transfer learning independently for each characteristic; a minimal sketch of this idea follows the abstract.
In order to confirm the feasibility of the proposed methodology, we performed experiments using the results of pre-training on the MSCOCO dataset, which comprises 120,000 images and about 600,000 general captions. Additionally, with the advice of an art therapist, about 300 pairs of images and expertise captions were created and used for the expertise-transplantation experiments. The experiments confirmed that captions generated according to the proposed methodology reflect the perspective of the transplanted expertise, whereas captions generated by learning on general data contain much content irrelevant to expert interpretation. In this paper, we propose a novel approach to specialized image interpretation: a method that uses transfer learning to generate captions specialized for a specific domain. In the future, by applying the proposed methodology to expertise transplantation in various fields, we expect active research on solving the lack of expertise data and on improving the performance of image captioning.
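As a rough illustration of the two ideas in this abstract, the sketch below freezes a pre-trained encoder and fine-tunes only the decoder on a small expertise set (transfer learning), then fine-tunes an independent decoder copy per caption characteristic (the character-independent idea). This is not the authors' code: the model architecture, sizes, dataset, and characteristic names ('objects', 'emotion') are all placeholders.

```python
# Minimal sketch of transfer learning plus per-characteristic fine-tuning.
# Everything here (architecture, data, names) is a hypothetical stand-in.
import copy
import torch
import torch.nn as nn

class CaptionModel(nn.Module):
    """Toy encoder-decoder captioner; stands in for a pre-trained model."""
    def __init__(self, vocab=1000, feat=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(2048, feat), nn.ReLU())  # image features -> hidden
        self.decoder = nn.Linear(feat, vocab)                           # hidden -> next-token logits

    def forward(self, img_feats):
        return self.decoder(self.encoder(img_feats))

def fine_tune(model, loader, epochs=3, lr=1e-4):
    """Fine-tune the decoder only; the pre-trained encoder stays frozen."""
    for p in model.encoder.parameters():
        p.requires_grad = False
    opt = torch.optim.Adam(model.decoder.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for img_feats, token_targets in loader:
            opt.zero_grad()
            loss = loss_fn(model(img_feats), token_targets)
            loss.backward()
            opt.step()
    return model

# Imagine weights learned on MSCOCO-scale general captions.
pretrained = CaptionModel()

# Hypothetical expertise data split by caption characteristic: each
# characteristic gets its own small (image-feature, caption-token) set.
expertise_data = {
    "objects": [(torch.randn(8, 2048), torch.randint(0, 1000, (8,)))],
    "emotion": [(torch.randn(8, 2048), torch.randint(0, 1000, (8,)))],
}

# Character-independent transfer: one independently fine-tuned decoder
# per characteristic, so the characteristics cannot interfere.
specialists = {
    name: fine_tune(copy.deepcopy(pretrained), loader)
    for name, loader in expertise_data.items()
}
```

At generation time, the specialist matching the desired characteristic would be selected, so each characteristic's captions come from a decoder that never saw the other characteristics' training signal, which is one plausible reading of how the interference is avoided.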