• Title/Summary/Keyword: Optimization of Computer Network


Evolutionary Optimization of Neurocontroller for Physically Simulated Compliant-Wing Ornithopter

  • Shim, Yoonsik
    • Journal of the Korea Society of Computer and Information
    • /
    • v.24 no.12
    • /
    • pp.25-33
    • /
    • 2019
  • This paper presents a novel evolutionary framework for optimizing a bio-inspired, fully dynamic neurocontroller for the maneuverable flapping flight of a simulated bird-sized ornithopter robot, which takes advantage of morphological computation and mechanosensory feedback to improve flight stability. To cope with the difficulty of generating robust flapping flight and its maneuvers, the robot's wing is modelled as a series of sub-plates joined by passive torsional springs, implementing a simplified version of feathers attached to the forearm skeleton. The neural controller has a bilaterally symmetric structure consisting of two fully connected neural network modules that receive mirrored sensory inputs from a series of flight navigation sensors as well as feather mechanosensors, letting the mechanosensors participate in pattern generation. The synergy of wing compliance and its sensory reflexes makes it possible for the robot to feel and exploit aerodynamic forces on its wings, potentially contributing to agility and stability during flight. The evolved robot exhibited target-following flight maneuvers using asymmetric wing movements as well as its tail, showing robustness to external aerodynamic disturbances.
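The passive wing compliance described above can be sketched as a torsional spring-damper at each sub-plate joint; the parameter values below (stiffness, damping, inertia) are illustrative assumptions, not figures from the paper:

```python
def torsion_torque(theta, omega, k=2.0, c=0.1, theta_rest=0.0):
    """Passive torsional spring-damper torque at a feather sub-plate joint.
    k (stiffness), c (damping), and theta_rest are illustrative values."""
    return -k * (theta - theta_rest) - c * omega

def simulate_joint(theta0, steps=5000, dt=1e-3, inertia=0.01):
    """Semi-implicit Euler integration of one joint released from angle theta0."""
    theta, omega = theta0, 0.0
    for _ in range(steps):
        alpha = torsion_torque(theta, omega) / inertia  # angular acceleration
        omega += alpha * dt
        theta += omega * dt
    return theta

print(round(simulate_joint(0.5), 4))  # settles near the rest angle 0.0
```

Released from a deflection, the joint settles back toward its rest angle; this passive return under aerodynamic load is the compliance that the evolved controller can sense and exploit.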

Visualization of Malwares for Classification Through Deep Learning (딥러닝 기술을 활용한 멀웨어 분류를 위한 이미지화 기법)

  • Kim, Hyeonggyeom;Han, Seokmin;Lee, Suchul;Lee, Jun-Rak
    • Journal of Internet Computing and Services
    • /
    • v.19 no.5
    • /
    • pp.67-75
    • /
    • 2018
  • According to Symantec's Internet Security Threat Report (2018), Internet security threats such as cryptojacking, ransomware, and mobile malware are rapidly increasing and diversifying. This means that malware detection requires not only accuracy but also versatility. In the past, malware detection technology focused on qualitative performance because of problems such as encryption and obfuscation. Nowadays, however, given the diversity of malware, versatility is required to detect its many variants, and optimization in terms of computing power is also required. In this paper, we present Stream Order (SO)-CNN and Incremental Coordinate (IC)-CNN, malware detection schemes using a CNN (Convolutional Neural Network) that effectively detect intelligent and diversified malware. The proposed methods visualize each malware binary file as a fixed-size image. The visualized malware binaries are learned through GoogLeNet to form a deep learning model, which detects and classifies malware. The proposed method shows better performance than the conventional method.
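The byte-to-pixel visualization step can be sketched as follows; the 64x64 resolution and zero-padding policy are assumptions for illustration, and the paper's SO/IC mapping variants and GoogLeNet training are not reproduced here:

```python
import numpy as np

def binary_to_image(data: bytes, size=(64, 64)) -> np.ndarray:
    """Render a malware binary as a fixed-size grayscale image:
    each byte becomes one pixel intensity (0-255); the byte stream is
    truncated or zero-padded to fit the target resolution."""
    n = size[0] * size[1]
    buf = data[:n].ljust(n, b"\x00")  # pad short binaries with zero bytes
    return np.frombuffer(buf, dtype=np.uint8).reshape(size)

img = binary_to_image(b"\x4d\x5a\x90\x00" * 200)  # an MZ-header-like byte pattern
print(img.shape, img.dtype)  # (64, 64) uint8
```

The resulting images can then be fed to any image classifier; fixing the image size is what lets binaries of arbitrary length share one network input shape.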

A New Head Pose Estimation Method based on Boosted 3-D PCA (새로운 Boosted 3-D PCA 기반 Head Pose Estimation 방법)

  • Lee, Kyung-Min;Lin, Chi-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.21 no.6
    • /
    • pp.105-109
    • /
    • 2021
  • In this paper, we evaluate Boosted 3-D PCA on benchmark datasets and analyze its network characteristics and performance. Training was performed on the 300W-LP dataset using the same learning method as Boosted 3-D PCA, and evaluation was carried out on the AFLW2000 dataset. The results show performance similar to that reported in the Boosted 3-D PCA paper. Because this approach can be trained on face-image datasets more freely than the existing landmark-to-pose method, poses can be predicted accurately in real-world situations. Since the optimization of the set of key points is not independent, we confirmed a way to reduce the computation time. This analysis is expected to be a valuable resource for improving the performance of Boosted 3-D PCA or applying it to various application domains.

Computer Vision-based Continuous Large-scale Site Monitoring System through Edge Computing and Small-Object Detection

  • Kim, Yeonjoo;Kim, Siyeon;Hwang, Sungjoo;Hong, Seok Hwan
    • International conference on construction engineering and project management
    • /
    • 2022.06a
    • /
    • pp.1243-1244
    • /
    • 2022
  • In recent years, the growing interest in off-site construction has led to factories scaling up their manufacturing and production processes in the construction sector. Consequently, continuous large-scale site monitoring in low-variability environments, such as prefabricated-component production plants (precast concrete production), has gained increasing importance. Although many studies on computer vision-based site monitoring have been conducted, challenges in deploying this technology for large-scale field applications remain. One issue is collecting and transmitting vast amounts of video data. Continuous site monitoring systems are based on real-time video data collection and analysis, which requires excessive computational resources and network traffic. In addition, it is difficult to integrate various object information with different sizes and scales into a single scene. Various sizes and types of objects (e.g., workers, heavy equipment, and materials) exist in a plant production environment, and these objects should be detected simultaneously for effective site monitoring. However, with existing object detection algorithms, it is difficult to simultaneously detect objects with significant differences in size, because collecting and training on massive amounts of object image data at various scales would be necessary. This study therefore developed a large-scale site monitoring system using edge computing and a small-object detection system to solve these problems. Edge computing is a distributed information technology architecture wherein image or video data is processed near the originating source rather than on a centralized server or cloud. By inferring information on AI computing modules attached to the CCTVs and transmitting only the processed information to the server, excessive network traffic can be reduced.
Small-object detection is an innovative method to detect different-sized objects by cropping the raw image, setting an appropriate number of rows and columns for image splitting based on the target object size. This enables the detection of small objects from cropped and magnified images; the detected small objects can then be expressed in the original image. In the inference process, this study used the YOLO-v5 algorithm, known for its fast processing speed and widely used for real-time object detection. This method could effectively detect large and even small objects that were difficult to detect with existing object detection algorithms. When the large-scale site monitoring system was tested, it performed well in detecting small objects, such as workers in a large-scale view of construction sites, which were inaccurately detected by existing algorithms. Our next goal is to incorporate various safety monitoring and risk analysis algorithms into this system, such as collision risk estimation based on the time-to-collision concept, enabling the optimization of safety routes by accumulating workers' paths and inferring risky areas from workers' trajectory patterns. Through such developments, this continuous large-scale site monitoring system can guide a construction plant's safety management system more effectively.
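The crop-and-remap bookkeeping behind the small-object detection step can be sketched as below; the detector itself (YOLO-v5 in this study) is omitted, and the non-overlapping window geometry is a simplifying assumption:

```python
def split_image(w, h, rows, cols):
    """Yield crop windows (x0, y0, x1, y1) tiling a w-by-h image."""
    tw, th = w // cols, h // rows
    for r in range(rows):
        for c in range(cols):
            yield (c * tw, r * th, (c + 1) * tw, (r + 1) * th)

def box_to_original(box, window):
    """Translate a detection box from crop coordinates back to the full image."""
    x0, y0 = window[0], window[1]
    bx0, by0, bx1, by1 = box
    return (bx0 + x0, by0 + y0, bx1 + x0, by1 + y0)

windows = list(split_image(1920, 1080, rows=2, cols=3))
print(len(windows))                                   # 6 crops
print(box_to_original((10, 20, 50, 60), windows[4]))  # (650, 560, 690, 600)
```

In practice, crop windows are usually given a small overlap so that objects straddling tile boundaries are not missed when each crop is detected independently.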


Research on Deep Learning Performance Improvement for Similar Image Classification (유사 이미지 분류를 위한 딥 러닝 성능 향상 기법 연구)

  • Lim, Dong-Jin;Kim, Taehong
    • The Journal of the Korea Contents Association
    • /
    • v.21 no.8
    • /
    • pp.1-9
    • /
    • 2021
  • Deep learning in computer vision has improved at an accelerated pace over a short period, but large-scale training data and computing power are still essential, and deriving an optimal network model involves time-consuming trial-and-error tasks. In this study, we propose a method for improving similar-image classification performance based on CR (Confusion Rate) that considers only the characteristics of the data itself, independent of network optimization or data augmentation. The proposed method improves the performance of the deep learning model by calculating the CRs for images in a dataset with similar characteristics and reflecting them in the weights of the loss function. The CR-based recognition method is also advantageous for identifying highly similar images because it enables recognition that takes inter-class similarity into account. Applied to the ResNet18 model, the proposed method showed a performance improvement of 0.22% on HanDB and 3.38% on Animal-10N. The proposed method is expected to serve as a basis for artificial intelligence research using the noisily labeled data that accompanies large-scale training data.
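The idea of reflecting confusion rates in loss weights can be sketched from a confusion matrix; the weighting formula below (1 + alpha * CR) is a hypothetical illustration, not the paper's exact definition:

```python
import numpy as np

def confusion_rates(cm: np.ndarray) -> np.ndarray:
    """Per-class confusion rate: the fraction of samples of each true
    class (rows) that were predicted as some other class."""
    return 1.0 - np.diag(cm) / cm.sum(axis=1)

def cr_loss_weights(cm: np.ndarray, alpha=1.0) -> np.ndarray:
    """Hypothetical CR-based class weights for a cross-entropy loss:
    classes confused more often receive proportionally larger weights."""
    return 1.0 + alpha * confusion_rates(cm)

cm = np.array([[90, 10], [30, 70]])  # rows: true class, cols: predicted
print(cr_loss_weights(cm))  # class 1, confused more often, gets a larger weight
```

Such per-class weights could then be supplied to a weighted cross-entropy loss (e.g., the `weight` argument of PyTorch's `CrossEntropyLoss`) so that hard-to-separate classes contribute more to the gradient.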

Privacy Preserving Techniques for Deep Learning in Multi-Party System (멀티 파티 시스템에서 딥러닝을 위한 프라이버시 보존 기술)

  • Hye-Kyeong Ko
    • The Journal of the Convergence on Culture Technology
    • /
    • v.9 no.3
    • /
    • pp.647-654
    • /
    • 2023
  • Deep learning is a useful method for classifying and recognizing complex data such as images and text, and its accuracy is the basis for making artificial intelligence-based services on the Internet useful. However, the vast amount of user data used for training in deep learning has led to privacy violation problems, and there is concern that companies that have collected personal and sensitive user data, such as photographs and voices, own the data indefinitely: users cannot delete their data or limit the purpose of its use. For example, data owners such as medical institutions that want to apply deep learning technology to patients' medical records cannot share patient data because of privacy and confidentiality issues, making it difficult to benefit from deep learning technology. In this paper, we have designed a privacy-preserving deep learning technique that allows multiple workers to use a neural network model jointly, without sharing input datasets, in a multi-party system. We propose a method that can selectively share small subsets using an optimization algorithm based on modified stochastic gradient descent, confirming that it can facilitate training with increased learning accuracy while protecting private information.
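One common way to realize such selective sharing (in the spirit of selective-SGD approaches; the selection rule and fraction below are assumptions, not the paper's exact algorithm) is for each worker to upload only the largest-magnitude gradient entries:

```python
import numpy as np

def select_top_gradients(grad: np.ndarray, fraction=0.1):
    """Pick the indices and values of the largest-magnitude gradient
    entries; only this small subset is shared with the other parties,
    so the full local update (and hence the local dataset) stays private."""
    k = max(1, int(fraction * grad.size))
    idx = np.argpartition(np.abs(grad.ravel()), -k)[-k:]
    return idx, grad.ravel()[idx]

g = np.array([0.01, -0.5, 0.02, 0.3, -0.001])
idx, vals = select_top_gradients(g, fraction=0.4)
print(sorted(idx.tolist()))  # [1, 3]: the two largest-magnitude entries
```

A parameter server would merge the uploaded subsets into the shared model; adding noise to the shared values before upload is a further common privacy measure.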

LFMMI-based acoustic modeling by using external knowledge (External knowledge를 사용한 LFMMI 기반 음향 모델링)

  • Park, Hosung;Kang, Yoseb;Lim, Minkyu;Lee, Donghyun;Oh, Junseok;Kim, Ji-Hwan
    • The Journal of the Acoustical Society of Korea
    • /
    • v.38 no.5
    • /
    • pp.607-613
    • /
    • 2019
  • This paper proposes LF-MMI (Lattice-Free Maximum Mutual Information)-based acoustic modeling using external knowledge for speech recognition. Here, external knowledge refers to text data other than the training data used for the acoustic model. LF-MMI, an objective function for optimizing DNN (Deep Neural Network) training, performs well in discriminative training. In LF-MMI, a phoneme probability is used as the prior probability for predicting the posterior probability of the DNN-based acoustic model. We propose using external knowledge to train this prior probability model and thereby improve the DNN-based acoustic model. The proposed model achieves a 14 % relative improvement over the conventional LF-MMI-based model.
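The prior/posterior relationship the abstract refers to is the standard Bayes decomposition used in hybrid DNN acoustic models; in the proposed method, the phoneme prior $p(s)$ is what gets trained on external text data:

$$p(o \mid s) \;\propto\; \frac{p(s \mid o)}{p(s)},$$

where $p(s \mid o)$ is the DNN's posterior over phoneme states $s$ given the acoustic observation $o$, and the scaled likelihood on the left is what the decoder consumes.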

System Optimization, Full Data Rate and Transmission Power of Decode-and-Forward Cooperative Communication in WSN (WSN환경에서 Decode-and-Forward 협력통신의 시스템 최적화 및 최대전송률과 저전력에 관한 연구)

  • Kim, Gun-Seok;Kong, Hyung-Yun
    • The KIPS Transactions:PartC
    • /
    • v.14C no.7
    • /
    • pp.597-602
    • /
    • 2007
  • In conventional cooperative communication, the data rate is half that of non-cooperative protocols. In this paper, we propose a full-data-rate DF (Decode-and-Forward) cooperative transmission scheme based on time-division multiplexing (TDM) channel access. When a DF protocol runs at full data rate, it cannot obtain diversity gain from the pairwise error probability (PEP) viewpoint; if time slots are added to obtain diversity gain, the data rate is reduced. The proposed algorithm uses orthogonal frequencies and constellation rotation to obtain both full data rate and diversity order 2. Moreover, performance is analyzed as a function of distance, and the components that affect system performance are optimized by computer simulation. The simulation results reveal that cooperation can save network power by up to 7 dB over direct transmission and 5 dB over multi-hop transmission at a BER of $10^{-2}$. Besides, the scheme improves the data rate of the system compared with the conventional DF protocol.

Performance Reengineering of Embedded Real-Time Systems (내장형 실시간 시스템의 성능 개선을 위한 리엔지니어링 기법)

  • 홍성수
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.30 no.5_6
    • /
    • pp.299-306
    • /
    • 2003
  • This paper formulates the problem of embedded real-time system re-engineering and presents a solution approach. Embedded system re-engineering is defined as the task of meeting performance requirements newly imposed on a system after its hardware and software have been fully implemented. The performance requirements may include real-time throughput and input-to-output latency. The proposed solution approach is based on bottleneck analysis and nonlinear optimization. Its inputs include a system design specified with a process network and a set of task graphs, a task allocation and schedule, and a new real-time throughput requirement specified as the system's period constraint. The approach works in two steps. In the first step, it determines bottleneck processes in the process network by estimating process latencies. In the second step, it derives a system of constraints whose variables are the performance scaling factors of the processing elements, then solves these constraints with the objective of minimizing the total hardware cost of the resultant system. The resulting scaling factors suggest the minimal-cost hardware upgrade that meets the new performance requirement. Since this approach does not modify carefully designed software structures, it helps shorten the re-engineering cycle.
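The first step, identifying bottleneck processes against the new period constraint, can be sketched as below; the second step's cost-minimizing nonlinear optimization is omitted, and the process latencies are hypothetical:

```python
def find_bottlenecks(latencies, period):
    """For each process whose estimated latency exceeds the new period
    constraint, return the minimal speedup (scaling factor) its
    processing element would need: s = latency / period."""
    return {p: lat / period for p, lat in latencies.items() if lat > period}

procs = {"decode": 8.0, "filter": 12.0, "encode": 15.0}  # latencies in ms
print(find_bottlenecks(procs, period=10.0))  # {'filter': 1.2, 'encode': 1.5}
```

The full approach would take these per-element speedup lower bounds as constraints and choose the cheapest combination of hardware upgrades that satisfies them all.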

Development of UDP based Massive VLBI Data Transfer Program (UDP 기반의 대용량 VLBI 데이터 전송 프로그램 개발)

  • Song, Min-Gyu;Kim, Hyun-Goo;Sohn, Bong-Won;Wi, Seog-Oh;Kang, Yong-Woo;Yeom, Jae-Hwan;Byun, Do-Young;Han, Seog-Tae
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.15 no.5
    • /
    • pp.37-51
    • /
    • 2010
  • In this paper, we discuss program implementation and system optimization for the effective transfer of huge amounts of data. In VLBI, which observes celestial bodies using radio telescopes hundreds or thousands of km apart, each VLBI observatory must transfer up to terabytes of observed data. For this reason, e-VLBI research based on advanced networks is actively being carried out to transfer data efficiently. Following this trend, we discuss the design and implementation of a system for high-speed (Gbps) data transfer. We use UDP as the transfer protocol to design a data transmission program with much higher speeds than currently available via VTP (VLBI Transport Protocol). The Tsunami-UDP algorithm is applied in implementing the transfer program so that transmission performance can be maximized, and observed data can be transferred faster and more reliably through optimization of the computer systems at each VLBI station.
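The sequencing and loss-report bookkeeping that Tsunami-style UDP transfer relies on can be sketched as follows; the actual protocol pairs a UDP data stream with a TCP control channel, and sockets, rate control, and the real payload size are omitted here as assumptions:

```python
def chunk_data(data: bytes, payload=1024):
    """Split data into sequence-numbered datagrams, as a UDP sender would."""
    n = (len(data) + payload - 1) // payload
    return {i: data[i * payload:(i + 1) * payload] for i in range(n)}

def missing_sequences(received, total):
    """Receiver-side gap detection: these sequence numbers would be sent
    back over the control channel as retransmission requests."""
    return sorted(set(range(total)) - set(received))

grams = chunk_data(b"\x00" * 5000)
print(len(grams))                                   # 5 datagrams
print(missing_sequences({0, 1, 3, 4}, len(grams)))  # [2]
```

Because UDP itself gives no delivery guarantees, this explicit sequence-number accounting is what lets the receiver reconstruct the observed data reliably while keeping the raw data path fast.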