• Title/Summary/Keyword: input-output modal testing (입력-출력 모달실험)

Estimation of Modal Parameters for Plastic Film-Covered Greenhouse Arches (비닐하우스 아치구조의 모달계수 산정)

  • Cho, Soon-Ho
    • Journal of the Earthquake Engineering Society of Korea / v.14 no.2 / pp.67-74 / 2010
  • From a series of vibration records obtained by experimental modal testing of greenhouse arch structures, using a fixed hammer and roving accelerometers, modal parameters such as natural frequencies, damping ratios and mode shapes are extracted with two of the most advanced frequency-domain system identification methods, PolyMAX and FDD. The former uses both input and output data, while the latter uses only the output data. The study primarily investigates whether the static buckling load can be determined, damage detected, and so on, for very slender steel-pipe arches by a non-destructive testing method based on vibration measurements. The extracted modal parameters generally correlated well with those obtained from finite element analysis, a promising result for the ongoing research.
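
The abstract contrasts an input-output method (PolyMAX) with an output-only one (FDD). As a rough illustration of the output-only idea, the sketch below builds a cross power spectral density (CPSD) matrix from the measured accelerations, takes an SVD at every frequency line, and picks peaks of the first singular value as candidate natural frequencies, with the corresponding singular vector as an approximate mode shape. It is a generic minimal sketch, not the authors' implementation; the channel count, sampling rate and FFT segment length are placeholder assumptions.

```python
import numpy as np
from scipy.signal import csd, find_peaks

def fdd(acc, fs, nperseg=1024):
    """acc: (n_samples, n_channels) output-only acceleration records; fs in Hz."""
    n_ch = acc.shape[1]
    # cross power spectral density matrix G[f, i, j] between every channel pair
    freqs, _ = csd(acc[:, 0], acc[:, 0], fs=fs, nperseg=nperseg)
    G = np.zeros((len(freqs), n_ch, n_ch), dtype=complex)
    for i in range(n_ch):
        for j in range(n_ch):
            _, G[:, i, j] = csd(acc[:, i], acc[:, j], fs=fs, nperseg=nperseg)
    # SVD at each frequency line: peaks of the first singular value mark
    # natural frequencies; the first singular vector approximates the mode shape
    s1 = np.empty(len(freqs))
    shapes = np.empty((len(freqs), n_ch), dtype=complex)
    for k in range(len(freqs)):
        U, S, _ = np.linalg.svd(G[k])
        s1[k], shapes[k] = S[0], U[:, 0]
    peaks, _ = find_peaks(s1, prominence=0.05 * s1.max())
    return freqs[peaks], shapes[peaks]
```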

Modal Testing of Arches for Plastic Film-Covered Greenhouses (비닐하우스 아치구조의 모달실험)

  • Cho, Soon-Ho
    • Journal of the Earthquake Engineering Society of Korea / v.14 no.2 / pp.57-65 / 2010
  • To determine the static buckling loads and evaluate the structural performance of slender steel pipe arches, such as those used in greenhouse structures, a series of modal tests with a fixed hammer and roving sensors was carried out on an arch rib, first with no load and then under a range of vertical loads applied in several steps. More attention was given to an internal arch, where, unlike at an end arch, no vertical or horizontal auxiliary members are placed. Modal parameters such as natural frequencies, mode shapes and damping ratios were extracted using advanced system identification methods such as PolyMAX (Polyreference Least-Squares Complex Frequency Domain) and compared with predictions from the commercial FEA (Finite Element Analysis) software ANSYS for various conditions. Overall, a good correlation was achieved; however, the reduction of natural frequencies expected from axial preload was not apparent for vertical loads up to about 38% of the arch's resistance. Difficulties related to field testing and parameter extraction for a very slender arch, such as those arising from the influence of neighboring members, are discussed in detail.
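
The unexpected part of this result is the absence of a frequency drop under preload. For an ideal axially compressed member, a textbook approximation relates the loaded and unloaded natural frequencies through the buckling load, f(P) ≈ f0·sqrt(1 − P/Pcr), so a load at 38% of the critical value would already cut the frequency by roughly 20%. The snippet below only evaluates that relation; reading the 38%-of-resistance figure as a fraction of the elastic buckling load, and the 10 Hz base frequency, are assumptions made for the example.

```python
import math

def loaded_frequency(f0, load_ratio):
    """f0: unloaded natural frequency [Hz]; load_ratio: P / Pcr (0 <= ratio < 1)."""
    return f0 * math.sqrt(1.0 - load_ratio)

f0 = 10.0                                   # placeholder unloaded frequency [Hz]
f_loaded = loaded_frequency(f0, 0.38)       # load level quoted in the abstract
print(f"{f_loaded / f0:.2f}")               # ~0.79, i.e. roughly a 21% reduction
```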

Tilt-based Photo Browsing UI on Mobile Devices (휴대기기에서의 기울임 기반 사진 감상 UI)

  • Jo, Seong-Jeong;Murray-Smith, Roderick;Choe, Chang-Gyu;Seong, Yeong-Hun;Lee, Gwang-Hyeon;Kim, Yeon-Bae
    • Proceedings of the HCI Society of Korea Conference / 2007.02a / pp.429-434 / 2007
  • This paper presents a tilt-based photo browsing UI for mobile devices and evaluates its usability. To address three control problems of existing tilt-input schemes (overshooting, fluctuation, and partial image display), we propose a photo-scrolling control dynamics model that depends on both photo position and tilt. The system consists of an accelerometer for tilt sensing, the tilt-driven photo-scrolling dynamics model, and a multimodal (visual, auditory, tactile) output stage for the model state. For sensor input and multimodal output, the battery pack of a Samsung MITs 4300 PDA was modified to carry a 3-axis accelerometer and a vibrotactile actuator (VBW32). The proposed system was compared with buttons and the iPod wheel, the two representative photo-browsing input methods. For a quantitative comparison, logs were collected from seven users performing a task of sequentially retrieving 20 photos out of 100; a questionnaire was administered for a qualitative comparison. In the experiments, the proposed method reduced the number of overshoots by 30%, the travel distance between photos by 25%, and the travel time by 17% compared with the existing tilt-based dynamics. The proposed method also offered controllability similar to buttons and was rated more enjoyable than both buttons and the iPod. An unexpected and interesting finding was that the commercially successful iPod actually showed poor usability because of frequent overshooting.
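
The abstract describes a scrolling dynamics model driven by tilt and photo position, with detent-like behaviour to curb overshooting and fluctuation. The sketch below is a hypothetical, much-simplified version of that kind of model, not the paper's: tilt applies a driving force, a spring toward the nearest photo centre acts as a detent, and damping suppresses oscillation. All gains, the time step and the snapping behaviour are illustrative assumptions.

```python
def step(x, v, tilt, dt=0.02, k_tilt=4.0, k_detent=6.0, damping=3.0):
    """x: position in photo units; v: velocity; tilt: device tilt in radians."""
    nearest = round(x)                      # centre of the closest photo
    force = k_tilt * tilt                   # tilt drives the photo strip
    force -= k_detent * (x - nearest)       # detent pulls toward a whole photo
    force -= damping * v                    # damping suppresses fluctuation
    v += force * dt
    x += v * dt
    return x, v

# usage: a large tilt scrolls across photos, levelling the device lets the
# detent snap the strip onto a photo, which is what limits overshooting
x, v = 0.0, 0.0
for _ in range(150):
    x, v = step(x, v, tilt=0.9)
for _ in range(150):
    x, v = step(x, v, tilt=0.0)
print(round(x, 2))                          # ends close to a whole photo index
```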

The optimization of processing condition of dissimilar material bonding using the 60 kHz ultrasonic transducer (60 kHz 초음파 공구 혼을 이용한 이종재료접합의 공정조건 최적화)

  • Lee, DongWook;Jeon, EuySick
    • Journal of the Korea Academia-Industrial cooperation Society / v.14 no.3 / pp.991-996 / 2013
  • In this paper, an ultrasonic horn with a natural frequency of 60 kHz was designed for bonding the dissimilar materials glass and solder. The horn was designed from a relational formula involving the aspect ratio of the input and output terminals and the horn length, and a modal analysis was performed to verify the validity of the design. The process parameters and the response were set through basic experiments, the bonding strength of the dissimilar-material joints made with the ultrasonic transducer was analyzed, and the optimal process parameters giving the maximum bonding strength were derived.
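
As a quick sanity check on the 60 kHz design target, a uniform rod resonates longitudinally at a half wavelength, L = c/(2f) with c = sqrt(E/ρ). The snippet below evaluates that relation with assumed titanium-alloy properties; a real stepped horn with a prescribed input/output aspect ratio requires the kind of modal analysis the paper performs.

```python
import math

E = 114e9      # Young's modulus of a Ti-6Al-4V-like alloy [Pa] (assumed)
rho = 4430.0   # density [kg/m^3] (assumed)
f = 60e3       # target natural frequency [Hz]

c = math.sqrt(E / rho)          # longitudinal wave speed, roughly 5.1 km/s
L = c / (2.0 * f)               # half-wave resonator length
print(f"wave speed ~{c:.0f} m/s, half-wave horn length ~{L * 1000:.1f} mm")
```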

Design of a Deep Neural Network Model for Image Caption Generation (이미지 캡션 생성을 위한 심층 신경망 모델의 설계)

  • Kim, Dongha;Kim, Incheol
    • KIPS Transactions on Software and Data Engineering / v.6 no.4 / pp.203-210 / 2017
  • In this paper, we propose an effective neural network model for image caption generation and model transfer. The model is a multimodal recurrent neural network built from distinct layers: a convolutional neural network layer that extracts visual information from images, an embedding layer that converts each word into a low-dimensional feature, a recurrent neural network layer that learns caption sentence structure, and a multimodal layer that combines visual and language information. The recurrent layer is constructed from LSTM units, which are well known to be effective for learning and transferring sequence patterns. Moreover, the model has a distinctive structure in which the output of the convolutional layer feeds not only the initial state of the recurrent layer but also the input of the multimodal layer, so that the visual information extracted from the image is used at every recurrent step when generating the corresponding textual caption. Through comparative experiments on the open datasets Flickr8k, Flickr30k, and MSCOCO, we demonstrate that the proposed multimodal recurrent neural network model performs well in terms of caption accuracy and model transfer.
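
The architecture described above, a CNN feature that both initialises the LSTM and re-enters a multimodal fusion layer at every step, can be sketched compactly. The PyTorch module below is a generic minimal rendering under assumed layer sizes and a toy CNN encoder; it is not the paper's configuration.

```python
import torch
import torch.nn as nn

class MultimodalCaptioner(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=256, img_dim=256):
        super().__init__()
        self.cnn = nn.Sequential(                 # stand-in for a pretrained CNN
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, img_dim))
        self.init_h = nn.Linear(img_dim, hidden_dim)   # image seeds the LSTM state
        self.init_c = nn.Linear(img_dim, hidden_dim)
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.multimodal = nn.Linear(hidden_dim + img_dim, hidden_dim)  # fuse per step
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        feat = self.cnn(images)                               # (B, img_dim)
        h0 = self.init_h(feat).unsqueeze(0)                   # (1, B, hidden)
        c0 = self.init_c(feat).unsqueeze(0)
        emb = self.embed(captions)                            # (B, T, embed)
        hs, _ = self.lstm(emb, (h0, c0))                      # (B, T, hidden)
        feat_t = feat.unsqueeze(1).expand(-1, hs.size(1), -1) # image at every step
        fused = torch.tanh(self.multimodal(torch.cat([hs, feat_t], dim=-1)))
        return self.out(fused)                                # (B, T, vocab) scores

# usage with dummy data
model = MultimodalCaptioner(vocab_size=1000)
scores = model(torch.randn(2, 3, 64, 64), torch.randint(0, 1000, (2, 12)))
print(scores.shape)  # torch.Size([2, 12, 1000])
```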

Artificial Intelligence for Assistance of Facial Expression Practice Using Emotion Classification (감정 분류를 이용한 표정 연습 보조 인공지능)

  • Kim, Dong-Kyu;Lee, So Hwa;Bong, Jae Hwan
    • The Journal of the Korea institute of electronic communication sciences / v.17 no.6 / pp.1137-1144 / 2022
  • In this study, an artificial intelligence (AI) assistant was developed to help users practice the facial expressions that convey emotions. The developed AI feeds multimodal inputs, sentences and facial images, to deep neural networks (DNNs), which compute the similarity between the emotion predicted from the sentence and the emotion predicted from the facial image. The user practices facial expressions for the situation given by a sentence, and the AI returns numerical feedback based on that similarity. A ResNet34 network was trained on the public FER2013 dataset to predict emotions from facial images. To predict emotions from sentences, a KoBERT model was fine-tuned by transfer learning on the conversational speech dataset for emotion classification released publicly by AIHub. The DNN that predicts emotions from facial images reached 65% accuracy, comparable to human emotion-classification ability, and the DNN that predicts emotions from sentences reached 90% accuracy. The performance of the developed AI was evaluated in experiments in which ordinary participants changed their facial expressions.
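
The numerical feedback step can be illustrated with a small sketch: compare the emotion distribution predicted from the sentence with the one predicted from the face and report a single score. The abstract does not name the similarity measure, so the cosine similarity over softmax probabilities used below, and the seven-class label set, are illustrative assumptions.

```python
import numpy as np

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise", "neutral"]

def softmax(logits):
    z = np.exp(logits - np.max(logits))
    return z / z.sum()

def expression_feedback(text_logits, face_logits):
    """Return a 0-100 score telling the user how well the expression matches."""
    p_text = softmax(np.asarray(text_logits, dtype=float))
    p_face = softmax(np.asarray(face_logits, dtype=float))
    cosine = p_text @ p_face / (np.linalg.norm(p_text) * np.linalg.norm(p_face))
    return 100.0 * cosine

# usage with made-up logits: the sentence implies happiness, the face mostly matches
print(round(expression_feedback([0, 0, 0, 3, 0, 0, 0], [0, 0, 0, 2, 0, 1, 0]), 1))
```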