• Title/Summary/Keyword: network module

Search Results: 1,437, Processing Time: 0.022 seconds

Multidrop Ethernet based IoT Architecture Design for VLBI System Control and Monitor (VLBI 시스템 제어 및 모니터를 위한 멀티드롭 이더넷 기반 IoT 아키텍처 설계)

  • Song, Min-Gyu
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.15 no.6
    • /
    • pp.1159-1168
    • /
    • 2020
  • In the past, the control and monitoring of a large number of instruments was a specialized area that required expensive dedicated modules to implement. However, with the recent development of embedded technology, various products capable of performing M&C (Monitor and Control) have been released, and their scope of application is expanding. Accordingly, it is now possible to build a small M&C environment more easily than before. In this paper, we discuss a method to replace the M&C of the VLBI (Very Long Baseline Interferometer) system, which previously had to be implemented with specialized hardware products, with inexpensive general embedded technology. Memory-based data transmission, reception, and storage are techniques already generalized not only in VLBI but also in the network field, and more effective M&C can be implemented when some aspects of Ethernet are optimized for the VLBI system environment. In this paper, we discuss in depth the design and implementation of the multidrop-based IoT architecture.
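A minimal sketch of the kind of packet-based M&C exchange the abstract describes; the packet layout here (one-byte device id, one-byte command code, 32-bit value, big-endian) is a hypothetical illustration, not the paper's protocol:

```python
import struct

# Hypothetical M&C packet layout (illustrative, not from the paper):
# one-byte device id, one-byte command code, 32-bit value, big-endian.
PACKET_FMT = "!BBI"

def build_mc_packet(device_id, command, value):
    """Pack a monitor-and-control request for one drop on the shared bus."""
    return struct.pack(PACKET_FMT, device_id, command, value)

def parse_mc_packet(data):
    """Unpack a received packet into (device_id, command, value)."""
    return struct.unpack(PACKET_FMT, data)

def poll_devices(device_ids, command=0x01):
    """Build one poll request per device on the multidrop segment."""
    return [build_mc_packet(dev, command, 0) for dev in device_ids]
```

In a multidrop setup, every device sees every packet, so each device would compare the id field against its own address and ignore the rest.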

Implementation of a Classification System for Dog Behaviors using YOLO-based Object Detection and a Node.js Server (YOLO 기반 개체 검출과 Node.js 서버를 이용한 반려견 행동 분류 시스템 구현)

  • Jo, Yong-Hwa;Lee, Hyuek-Jae;Kim, Young-Hun
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.21 no.1
    • /
    • pp.29-37
    • /
    • 2020
  • This paper implements a method of extracting dog objects through real-time image analysis and classifying dog behaviors from the extracted images. Darknet YOLO was used to detect dog objects, and the Teachable Machine provided by Google was used to classify behavior patterns from the extracted images. The trained Teachable Machine model is saved in Google Drive and can be used by ml5.js implemented on a Node.js server. By implementing an interactive web server with the socket.io module on the Node.js server, the classified results are transmitted to the user's smartphone or PC in real time so that they can be checked anytime, anywhere.
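The detect-then-classify pipeline can be sketched with two small helpers; the detection tuple format and the behavior labels are assumptions for illustration, not the paper's actual interfaces:

```python
def filter_dog_boxes(detections, dog_class="dog", conf_thresh=0.5):
    """Keep only confident dog detections from a YOLO-style output list.

    Each detection is assumed to be a (class_name, confidence, box) tuple.
    """
    return [d for d in detections if d[0] == dog_class and d[1] >= conf_thresh]

def classify_behavior(probs, labels=("sit", "stand", "lie", "walk")):
    """Map classifier probabilities to the most likely behavior label."""
    best = max(range(len(probs)), key=lambda i: probs[i])
    return labels[best], probs[best]
```

The filtered boxes would be cropped from the frame and fed to the behavior classifier; the resulting label is what the server pushes to clients.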

Path selection algorithm for multi-path system based on deep Q learning (Deep Q 학습 기반의 다중경로 시스템 경로 선택 알고리즘)

  • Chung, Byung Chang;Park, Heasook
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.25 no.1
    • /
    • pp.50-55
    • /
    • 2021
  • A multi-path system utilizes multiple networks simultaneously. It is expected that a multi-path system can enhance the speed, reliability, and security of network communication. In this paper, we focus on path selection in a multi-path system. To select the optimal path, we propose a deep reinforcement learning algorithm that is rewarded by the round-trip time (RTT) of each network. Unlike the multi-armed bandit model, deep Q learning is applied to handle rapidly changing situations. Because the RTT data arrive with a delay, we also propose a compensation algorithm for the delayed reward. Moreover, we implement a testbed learning server to evaluate the performance of the proposed algorithm. The learning server contains a distributed database and a TensorFlow module to operate the deep learning algorithm efficiently. By means of simulation, we show that the proposed algorithm outperforms the lowest-RTT baseline by about 20%.
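The RTT-rewarded path selection can be illustrated with a tabular Q-learning sketch; the paper uses a deep Q-network, so this simplified version keeps only the reward idea that a lower round-trip time is better:

```python
import random

class PathSelector:
    """Simplified tabular Q-learning path selector (illustrative only:
    the paper's method uses a deep Q-network and a delayed-reward
    compensation scheme not reproduced here)."""

    def __init__(self, n_paths, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = [0.0] * n_paths          # one Q-value per candidate path
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def select(self):
        """Epsilon-greedy choice: mostly the best-known path."""
        if random.random() < self.epsilon:
            return random.randrange(len(self.q))
        return max(range(len(self.q)), key=lambda i: self.q[i])

    def update(self, path, rtt_ms):
        """Q-update with reward = -RTT, so faster paths earn more."""
        reward = -rtt_ms
        best_next = max(self.q)
        self.q[path] += self.alpha * (reward + self.gamma * best_next - self.q[path])
```

With exploration disabled (epsilon=0), repeated updates steer selection toward the path with the lowest observed RTT.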

Assembly Performance Evaluation for Prefabricated Steel Structures Using k-nearest Neighbor and Vision Sensor (k-근접 이웃 및 비전센서를 활용한 프리팹 강구조물 조립 성능 평가 기술)

  • Bang, Hyuntae;Yu, Byeongjun;Jeon, Haemin
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.35 no.5
    • /
    • pp.259-266
    • /
    • 2022
  • In this study, we developed a deep learning and vision sensor-based assembly performance evaluation method for prefabricated steel structures. The assembly parts were segmented using a modified version of the receptive field block convolution module inspired by the eccentric function of the human visual system. The quality of the assembly was evaluated by detecting the bolt holes in the segmented assembly part and calculating the bolt hole positions. To validate the performance of the evaluation, models of standard and defective assembly parts were produced using a 3D printer. The assembly part segmentation network was trained on the 3D model images captured from a vision sensor. The bolt hole positions in the segmented assembly image were calculated using image processing techniques, and the assembly performance evaluation using the k-nearest neighbor algorithm was verified. The experimental results show that the assembly parts were segmented with high precision, and the assembly performance based on the positions of the bolt holes in the detected assembly part was evaluated with a classification error of less than 5%.
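The final classification step can be sketched as a plain k-nearest-neighbor vote over bolt-hole position features; the feature encoding (flattened coordinates) and the labels are illustrative assumptions, not the paper's exact setup:

```python
def knn_classify(query, samples, k=3):
    """Classify a bolt-hole feature vector by majority vote among the k
    nearest labeled samples (Euclidean distance).

    `samples` is a list of (feature_vector, label) pairs; features here
    are hypothetical flattened (x, y) bolt-hole coordinates.
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    nearest = sorted(samples, key=lambda s: dist(query, s[0]))[:k]
    votes = {}
    for _, label in nearest:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)
```

A query whose measured hole positions sit close to the reference geometry votes "pass"; one shifted toward the defective cluster votes "defect".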

Question Similarity Measurement of Chinese Crop Diseases and Insect Pests Based on Mixed Information Extraction

  • Zhou, Han;Guo, Xuchao;Liu, Chengqi;Tang, Zhan;Lu, Shuhan;Li, Lin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.11
    • /
    • pp.3991-4010
    • /
    • 2021
  • The Question Similarity Measurement of Chinese Crop Diseases and Insect Pests (QSM-CCD&IP) aims to judge the user's tendency to ask questions regarding input problems. The measurement is the basis of the Agricultural Knowledge Question and Answering (Q & A) system, information retrieval, and other tasks. However, the corpora and measurement methods available in this field have some deficiencies. In addition, error propagation may occur when general methods embed sentences while ignoring word boundary features and local context information. Hence, these factors make the task challenging. To solve the above problems and tackle the Question Similarity Measurement task in this work, a corpus on Chinese crop diseases and insect pests (CCDIP), which contains 13 categories, was established. Then, taking the CCDIP as the research object, this study proposes a Chinese agricultural text similarity matching model, namely, the AgrCQS. This model is based on mixed information extraction. Specifically, the hybrid embedding layer can enrich character information and improve the recognition ability of the model on word boundaries. Multi-scale local information can be extracted by a multi-core convolutional neural network based on multi-weight (MM-CNN). The self-attention mechanism can enhance the fusion ability of the model on global information. In this research, the performance of the AgrCQS on the CCDIP is verified, and three benchmark datasets, namely, AFQMC, LCQMC, and BQ, are also used. The accuracy rates are 93.92%, 74.42%, 86.35%, and 83.05%, respectively, which are higher than those of baseline systems that do not use any external knowledge. Additionally, the proposed module can be extracted separately and applied to other models, thus providing a reference for related research.
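As a point of reference for the matching task, here is a trivial character-level cosine-similarity baseline; it reproduces none of the AgrCQS components and is only meant to show the input/output shape of question similarity measurement:

```python
def char_counts(text):
    """Bag-of-characters vector as a dict (a crude stand-in for the
    paper's hybrid character embeddings)."""
    counts = {}
    for ch in text:
        counts[ch] = counts.get(ch, 0) + 1
    return counts

def question_similarity(q1, q2):
    """Cosine similarity between character-count vectors: 1.0 for
    identical questions, 0.0 for questions sharing no characters."""
    c1, c2 = char_counts(q1), char_counts(q2)
    dot = sum(c1[ch] * c2.get(ch, 0) for ch in c1)
    n1 = sum(v * v for v in c1.values()) ** 0.5
    n2 = sum(v * v for v in c2.values()) ** 0.5
    return dot / (n1 * n2) if n1 and n2 else 0.0
```

A learned model replaces the count vectors with embeddings that capture word boundaries and context, which is exactly where the baseline falls short.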

Key Frame Detection Using Contrastive Learning (대조적 학습을 활용한 주요 프레임 검출 방법)

  • Kyoungtae, Park;Wonjun, Kim;Ryong, Lee;Rae-young, Lee;Myung-Seok, Choi
    • Journal of Broadcast Engineering
    • /
    • v.27 no.6
    • /
    • pp.897-905
    • /
    • 2022
  • Research on video key frame detection has been actively conducted in the field of computer vision. Recently, with advances in deep learning techniques, the performance of key frame detection has improved, but the diverse types of video content and complicated backgrounds still pose a problem for efficient learning. In this paper, we propose a novel method for key frame detection, which utilizes contrastive learning and a memory bank module. The proposed method trains the feature-extracting network based on the difference between neighboring frames and frames from separate videos. Founded on contrastive learning, the method saves and updates key frames in the memory bank, which efficiently reduces redundancy in the video. Experimental results on a video dataset show the effectiveness of the proposed method for key frame detection.
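The memory-bank idea of keeping only sufficiently novel frames can be sketched as follows; the cosine threshold and the add-or-reject rule are illustrative, not the paper's learned procedure:

```python
def cosine_sim(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

class MemoryBank:
    """Toy memory bank: a frame feature becomes a key frame only if it is
    sufficiently dissimilar to every stored feature (threshold and update
    rule are illustrative assumptions)."""

    def __init__(self, threshold=0.9):
        self.features = []
        self.threshold = threshold

    def maybe_add(self, feat):
        for stored in self.features:
            if cosine_sim(feat, stored) >= self.threshold:
                return False  # redundant: too close to an existing key frame
        self.features.append(feat)
        return True
```

Near-duplicate frames are rejected, so the bank ends up holding one representative feature per distinct scene.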

An active learning method with difficulty learning mechanism for crack detection

  • Shu, Jiangpeng;Li, Jun;Zhang, Jiawei;Zhao, Weijian;Duan, Yuanfeng;Zhang, Zhicheng
    • Smart Structures and Systems
    • /
    • v.29 no.1
    • /
    • pp.195-206
    • /
    • 2022
  • Crack detection is essential for the inspection of existing structures, and crack segmentation based on deep learning is a significant solution. However, datasets are usually one of the key issues. When building a new dataset for deep learning, the laborious and time-consuming annotation of a large number of crack images is an obstacle. The aim of this study is to develop an approach that can automatically select a small portion of the most informative crack images from a large pool for annotation, rather than labeling all crack images. An active learning method with a difficulty learning mechanism for crack segmentation tasks is proposed. Experiments are carried out on a crack image dataset of a steel box girder, which contains 500 images of 320×320 size for training, 100 for validation, and 190 for testing. In the active learning experiments, the 500 training images act as unlabeled images. The acquisition function in our method is compared with traditional acquisition functions, i.e., Query-By-Committee (QBC), Entropy, and Core-set. Further, comparisons are made on four common segmentation networks: U-Net, DeepLabV3, Feature Pyramid Network (FPN), and PSPNet. The results show that when training with the 200 (40%) most informative crack images selected by our method, the four segmentation networks achieve 92%-95% of the performance obtained when training with all 500 (100%) crack images. The acquisition function in our method measures the informativeness of unlabeled crack images more accurately than the four traditional acquisition functions at most active learning stages. Our method can automatically and accurately select the most informative images for annotation from many unlabeled crack images. Additionally, the dataset built by selecting 40% of all crack images can support crack segmentation networks that achieve more than 92% of the performance reached when all the images are used.
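The Entropy acquisition baseline that the paper compares against can be sketched as below; the paper's own difficulty-learning acquisition function is not reproduced here:

```python
import math

def pixel_entropy(prob_map):
    """Mean binary entropy of a predicted crack-probability map; higher
    entropy means the model is less certain, i.e. more informative."""
    eps = 1e-12
    ent = [-(p * math.log(p + eps) + (1 - p) * math.log(1 - p + eps))
           for p in prob_map]
    return sum(ent) / len(ent)

def select_informative(unlabeled, budget):
    """Pick the `budget` most uncertain samples from (name, prob_map)
    pairs, ranked by mean prediction entropy."""
    ranked = sorted(unlabeled, key=lambda s: pixel_entropy(s[1]), reverse=True)
    return [name for name, _ in ranked[:budget]]
```

Only the selected images would be sent to annotators; the rest stay in the unlabeled pool for the next active learning round.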

Super-Resolution Transmission Electron Microscope Image of Nanomaterials Using Deep Learning (딥러닝을 이용한 나노소재 투과전자 현미경의 초해상 이미지 획득)

  • Nam, Chunghee
    • Korean Journal of Materials Research
    • /
    • v.32 no.8
    • /
    • pp.345-353
    • /
    • 2022
  • In this study, super-resolution images of transmission electron microscope (TEM) images were generated for nanomaterial analysis using deep learning. 1169 paired images, with 256 × 256 pixels (high resolution: HR) from TEM measurements and 32 × 32 pixels (low resolution: LR) produced using the Python module OpenCV, were used to train the deep learning models. The TEM images were of DyVO4 nanomaterials synthesized by hydrothermal methods. Mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM) were used as metrics to evaluate the performance of the models. First, a super-resolution (SR) image was obtained using the traditional interpolation method used in computer vision. In the SR image at low magnification, the shape of the nanomaterial improved. However, the SR images at medium and high magnification failed to show the characteristics of the lattice of the nanomaterials. Second, to obtain an SR image, the deep learning model included a residual network, which reduces the loss of spatial information in the convolutional process of obtaining a feature map. In the process of optimizing the deep learning model, it was confirmed that the performance of the model improved as the amount of data increased. In addition, by optimizing the deep learning model with a loss function that includes both MAE and SSIM, improved rendering of the nanomaterial lattice in SR images was achieved at medium and high magnifications. The final proposed deep learning model used four residual blocks to obtain the feature map of the low-resolution image, and the super-resolution image was completed using Upsampling2D and three further applications of the residual block.
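The PSNR metric used to score the models can be computed as below; images are represented as flat lists of 8-bit grayscale pixel values for simplicity:

```python
import math

def mse(img_a, img_b):
    """Mean squared error between two equally sized grayscale images,
    given as flat lists of pixel values in [0, 255]."""
    return sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)

def psnr(img_a, img_b, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means the super-resolved
    image is closer to the high-resolution ground truth."""
    err = mse(img_a, img_b)
    if err == 0:
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / err)
```

MAE and SSIM would be computed alongside PSNR in the same way during evaluation; SSIM additionally compares local luminance, contrast, and structure rather than raw pixel error.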

Window Attention Module Based Transformer for Image Classification (윈도우 주의 모듈 기반 트랜스포머를 활용한 이미지 분류 방법)

  • Kim, Sanghoon;Kim, Wonjun
    • Journal of Broadcast Engineering
    • /
    • v.27 no.4
    • /
    • pp.538-547
    • /
    • 2022
  • Recently introduced image classification methods using Transformers show remarkable performance improvements over conventional neural network-based methods. To consider regional features effectively, research has been actively conducted on how to apply Transformers by dividing the image into multiple window areas, but the learning of inter-window relationships is still insufficient. In this paper, to overcome this problem, we propose a Transformer structure that can reflect the relationship between windows during learning. The proposed method computes the importance of each window region through compression and a fully connected layer based on self-attention operations within each window region. The computed importance is applied to each window area as a learned weight of the inter-window relationship to re-calibrate the feature values. Experimental results show that the proposed method can effectively improve the performance of existing Transformer-based methods.
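The window-importance recalibration can be sketched as average pooling per window followed by a softmax across windows; this is a simplified stand-in for the paper's compression and fully connected layers, with no learned parameters:

```python
import math

def window_importance(window_feats):
    """Average-pool each window's features to a scalar, then softmax
    across windows to get one importance weight per window."""
    pooled = [sum(w) / len(w) for w in window_feats]
    exps = [math.exp(p) for p in pooled]
    total = sum(exps)
    return [e / total for e in exps]

def recalibrate(window_feats):
    """Re-scale each window's feature values by its importance weight,
    so more salient windows contribute more downstream."""
    weights = window_importance(window_feats)
    return [[x * w for x in feats] for feats, w in zip(window_feats, weights)]
```

In the actual model, the pooling and weighting would be learned layers and the recalibrated features would feed the next Transformer stage.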

HDR Video Reconstruction via Content-based Alignment Network (내용 기반의 정렬을 통한 HDR 동영상 생성 방법)

  • Haesoo Chung;Nam Ik Cho
    • Journal of Broadcast Engineering
    • /
    • v.28 no.2
    • /
    • pp.185-193
    • /
    • 2023
  • As many different over-the-top (OTT) services become ubiquitous, the demand for high-quality content is increasing. However, high dynamic range (HDR) content, which can provide more realistic scenes, is still insufficient. In this regard, we propose a new HDR video reconstruction technique using multi-exposure low dynamic range (LDR) videos. First, we align a reference frame and its neighboring frames to compensate for the motion between them. In the alignment stage, we perform content-based alignment to improve accuracy, and we also present a high-resolution (HR) module to enhance details. Then, we merge the aligned features to generate a final HDR frame. Experimental results demonstrate that our method outperforms existing methods.
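The final merging step can be illustrated as an exposure-weighted per-pixel average of the aligned frames; the fixed weights and linear merge are schematic stand-ins for the paper's learned merging network:

```python
def hdr_merge(frames, weights):
    """Weighted per-pixel average of aligned LDR frames, each given as a
    flat list of linearized pixel values, with one weight per frame."""
    total_w = sum(weights)
    n_pixels = len(frames[0])
    return [sum(w * frame[i] for frame, w in zip(frames, weights)) / total_w
            for i in range(n_pixels)]
```

In practice, the weights would vary per pixel (e.g. favoring the short exposure in highlights and the long exposure in shadows) and would be produced by the network rather than fixed by hand.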