• Title/Summary/Keyword: optimisation algorithms


New-directional Approach: Plastic Collapse Design of Grillages (그릴리지 구조의 소성 붕괴 설계)

  • 김윤영;박제웅
    • Proceedings of the Korea Committee for Ocean Resources and Engineering Conference / 2000.04a / pp.96-103 / 2000
  • This research presents a new design method as a basic concept for a more efficient minimum-weight design of grillages, attempting to describe the true collapse mechanism through as broad a search as possible. It serves as an introduction to the numerical techniques of Linear Programming (LP) and Automatic Modified Direct Plastic Frame Analysis (AMDPFA). Attention is directed to both analysis and design, and emphasis is placed on the physical significance of the Systematic Searching Techniques (SST) involved. In minimum-weight grillage design, a parameterisation study of the optimum beam configuration was carried out over the range of beam sections, for a given plastic section modulus, likely to occur in structures, using an adaptive stochastic optimisation technique, Genetic Algorithms.
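As a minimal sketch of the kind of search this abstract describes: a real-coded genetic algorithm minimising a weight proxy (cross-sectional area) for a rectangular beam section under a plastic-section-modulus constraint. The bounds, penalty weight, and required modulus `Z_REQ` are illustrative assumptions, not values from the paper.

```python
import random

Z_REQ = 500.0  # required plastic section modulus (illustrative units)

def penalised_cost(ind):
    """Area b*h (weight proxy) plus a penalty when the rectangular
    section's plastic modulus Z_p = b*h^2/4 falls short of Z_REQ."""
    b, h = ind
    z_p = b * h * h / 4.0
    return b * h + 100.0 * max(0.0, Z_REQ - z_p)

def ga(pop_size=60, generations=200, seed=1):
    rng = random.Random(seed)
    lo, hi = (1.0, 1.0), (30.0, 60.0)   # bounds on breadth b and depth h
    pop = [tuple(rng.uniform(l, u) for l, u in zip(lo, hi))
           for _ in range(pop_size)]
    for _ in range(generations):
        nxt = [min(pop, key=penalised_cost)]                  # elitism
        while len(nxt) < pop_size:
            p1 = min(rng.sample(pop, 3), key=penalised_cost)  # tournament
            p2 = min(rng.sample(pop, 3), key=penalised_cost)
            child = tuple(
                min(u, max(l,                                  # clip to bounds
                    g1 + rng.random() * (g2 - g1)              # blend crossover
                    + rng.gauss(0.0, 0.5)))                    # mutation
                for g1, g2, l, u in zip(p1, p2, lo, hi)
            )
            nxt.append(child)
        pop = nxt
    return min(pop, key=penalised_cost)

best = ga()
```

The penalty term steers the population toward feasible sections; the constraint ends up active at the optimum, which is the expected behaviour for a minimum-weight design.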


Genetic Algorithms as Optimisation Tools and Their Applications (최적화기법으로서의 유전알고리즘과 그 응용)

  • 진강규;하주식
    • Journal of Advanced Marine Engineering and Technology / v.21 no.2 / pp.108-116 / 1997
  • Genetic algorithms were first developed in 1975 by Professor Holland in the United States as a tool for solving complex optimisation problems by combining several features found in the principles of evolution with computer algorithms. When the search environment of a given problem is multivariable or multi-modal and therefore highly complex, or is only partially known, optimisation using conventional gradient-based methods becomes very difficult and in some cases impossible. For this reason, robust search methods such as genetic algorithms are required. The advantage of genetic algorithms is that they are free from constraints on the search space such as continuity, differentiability, and unimodality. In other words, they require no prior knowledge of the search space beyond the objective function, and can converge toward the global solution even in very large and complex spaces. Because of these characteristics, genetic algorithms are recognised as a method for solving many complex optimisation problems in real-world environments, and have been successfully applied in many fields such as function optimisation, neural network training, identification and control of dynamic systems, and signal processing. Compared with this importance, domestic research on genetic algorithms is still at an early stage, but interest has recently been growing, and as their application areas broaden, both theoretical development and practical application are expected to spread. This review article therefore examines the principles and application examples of genetic algorithms, in the hope of being of some help to readers who wish to solve optimisation problems.
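The gradient-free, multi-modal search described in the article can be illustrated with a classic binary-coded genetic algorithm maximising Goldberg's well-known test function f(x) = x·sin(10πx) + 2 on [-1, 2], which has many local peaks. The operator choices and parameters below are illustrative, not taken from the article.

```python
import math
import random

BITS = 22
LO, HI = -1.0, 2.0

def fitness(x):
    """Multi-modal objective from Goldberg's classic GA example."""
    return x * math.sin(10 * math.pi * x) + 2.0

def decode(bits):
    """Map a bitstring chromosome to a real number in [LO, HI]."""
    n = int("".join(map(str, bits)), 2)
    return LO + (HI - LO) * n / (2 ** BITS - 1)

def evolve(pop_size=50, generations=120, pc=0.8, pm=0.01, seed=7):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(BITS)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda c: fitness(decode(c)), reverse=True)
        nxt = scored[:2]                       # elitism: keep the two best
        while len(nxt) < pop_size:
            # tournament selection
            p1 = max(rng.sample(pop, 3), key=lambda c: fitness(decode(c)))
            p2 = max(rng.sample(pop, 3), key=lambda c: fitness(decode(c)))
            # single-point crossover
            if rng.random() < pc:
                cut = rng.randrange(1, BITS)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            # bit-flip mutation
            child = [b ^ 1 if rng.random() < pm else b for b in child]
            nxt.append(child)
        pop = nxt
    return decode(max(pop, key=lambda c: fitness(decode(c))))

x_best = evolve()
```

Note that no derivative of f is ever used: only fitness comparisons drive the search, which is exactly the freedom from continuity and differentiability constraints the article emphasises.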


Heuristic Search Method for Cost-optimized Computer Remanufacturing (복수의 중고 컴퓨터 재조립 비용 최소화를 위한 휴리스틱 탐색 알고리즘)

  • Jun, Hong-Bae;Sohn, Gapsu
    • Journal of Korean Society of Industrial and Systems Engineering / v.35 no.4 / pp.98-109 / 2012
  • Recently, the optimisation of end-of-life (EOL) product remanufacturing processes has been highlighted. In particular, computer remanufacturing is becoming important as the number of disposed computers rapidly increases. In computer remanufacturing, the value of a remanufactured computer differs depending on the selection of used computer parts. Hence, it is important to select appropriate computer parts during reassembly. To this end, this study deals with a decision-making problem of selecting the best combination of computer parts to minimise the total remanufacturing cost. The problem is formulated as an integer nonlinear programming model, and heuristic search algorithms are proposed to solve it.
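A minimal sketch of one plausible heuristic for this kind of part-selection problem (not the authors' actual algorithm): greedily pick the cheapest part per category that meets a quality floor. The inventory, costs, and quality grades below are invented for illustration.

```python
# Hypothetical inventory: category -> list of (part_name, cost, quality)
parts = {
    "cpu":    [("cpu_a", 40, 0.9), ("cpu_b", 25, 0.6)],
    "memory": [("mem_a", 15, 0.8), ("mem_b", 10, 0.5)],
    "disk":   [("dsk_a", 20, 0.7), ("dsk_b", 30, 0.95)],
}
MIN_QUALITY = 0.6  # minimum acceptable quality grade per part

def greedy_select(parts, min_quality=MIN_QUALITY):
    """Pick the cheapest part per category that meets the quality floor."""
    chosen, total = {}, 0
    for cat, options in parts.items():
        feasible = [p for p in options if p[2] >= min_quality]
        if not feasible:
            raise ValueError(f"no feasible part for {cat}")
        name, price, _ = min(feasible, key=lambda p: p[1])
        chosen[cat] = name
        total += price
    return chosen, total

selection, cost = greedy_select(parts)
```

A real heuristic search for the integer nonlinear model would add interaction terms between parts and an improvement phase (e.g., swapping parts while feasibility holds); the greedy pass above would then serve only as a starting solution.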

Optimisation of pipeline route in the presence of obstacles based on a least cost path algorithm and laplacian smoothing

  • Kang, Ju Young;Lee, Byung Suk
    • International Journal of Naval Architecture and Ocean Engineering / v.9 no.5 / pp.492-498 / 2017
  • Subsea pipeline route design is a crucial task for the offshore oil and gas industry, and the route selected can significantly affect the success or failure of an offshore project. Thus, it is essential to design pipeline routes to be eco-friendly, economical and safe. Obstacle avoidance is one of the main problems that affect pipeline route selection. In this study, we propose a technique for automatic obstacle-avoiding route design. The Laplacian smoothing algorithm was used to make the automatically generated pipeline routes fairer. The algorithms were fast, and the method was shown to be effective and easy to use in a simple set of case studies.
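The Laplacian smoothing step can be sketched as follows: each interior vertex of the route polyline is repeatedly pulled toward the midpoint of its neighbours, while the endpoints (the fixed start and end of the pipeline) stay put. This is a generic smoothing sketch, not the paper's exact formulation, and the zig-zag route below is an invented stand-in for a grid-based least cost path.

```python
def laplacian_smooth(path, iterations=50, alpha=0.5):
    """Move each interior vertex a fraction alpha toward the midpoint of
    its two neighbours; endpoints of the route remain fixed."""
    pts = [list(p) for p in path]
    for _ in range(iterations):
        pts = [pts[0]] + [
            [(1 - alpha) * pts[i][d]
             + alpha * 0.5 * (pts[i - 1][d] + pts[i + 1][d])
             for d in range(2)]
            for i in range(1, len(pts) - 1)
        ] + [pts[-1]]
    return pts

# A zig-zag route such as a grid-based least cost path might produce;
# smoothing fairs the kinks out while keeping the endpoints in place.
route = [(0, 0), (1, 1), (2, 0), (3, 1), (4, 0)]
smooth = laplacian_smooth(route)
```

In practice one would also re-check the smoothed route against the obstacle map after each pass, since pulling vertices toward their neighbours can drag the line back into an exclusion zone.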

Residual Learning Based CNN for Gesture Recognition in Robot Interaction

  • Han, Hua
    • Journal of Information Processing Systems / v.17 no.2 / pp.385-398 / 2021
  • The complexity of deep learning models affects the real-time performance of gesture recognition, thereby limiting the application of gesture recognition algorithms in actual scenarios. Hence, a residual learning neural network based on a deep convolutional neural network is proposed. First, small convolution kernels are used to extract the local details of gesture images. Subsequently, a shallow residual structure is built to share weights, avoiding vanishing or exploding gradients as the network deepens and thereby reducing the difficulty of model optimisation. Additional convolutional neural networks are used to accelerate the refinement of deep abstract features based on the spatial importance of the gesture feature distribution. Finally, a fully connected cascade softmax classifier is used to complete the gesture recognition. Compared with the densely connected feature-multiplexing network, the proposed algorithm is optimised in feature multiplexing to avoid performance fluctuations caused by feature redundancy. Experimental results on the ISOGD gesture dataset and the Gesture dataset show that the proposed algorithm affords fast convergence and high accuracy.
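The residual idea this abstract relies on (an identity shortcut added to a learned transformation, so very deep stacks remain trainable) can be shown in a toy, framework-free form. The "layer" here is a deliberately trivial stand-in, not the paper's CNN.

```python
def layer(x, weight, bias):
    """A toy one-weight 'layer' on a 1-D list, with ReLU activation."""
    return [max(0.0, weight * v + bias) for v in x]

def residual_block(x, weight, bias):
    """Residual learning: the block learns a residual F(x), and the
    identity shortcut adds x back, so the mapping is F(x) + x and
    gradients can flow around the layer unchanged."""
    fx = layer(x, weight, bias)
    return [f + v for f, v in zip(fx, x)]

# Even if the layer contributes nothing (weight = 0), the shortcut
# still passes x through intact, which is what keeps deep stacks
# from degrading as more blocks are added.
out = residual_block([1.0, 2.0, 3.0], weight=0.0, bias=0.0)
```

Stacking many such blocks approximates the shallow residual structure the abstract describes: each block only has to learn a small correction on top of the identity.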

Classroom Roll-Call System Based on ResNet Networks

  • Zhu, Jinlong;Yu, Fanhua;Liu, Guangjie;Sun, Mingyu;Zhao, Dong;Geng, Qingtian;Su, Jinbo
    • Journal of Information Processing Systems / v.16 no.5 / pp.1145-1157 / 2020
  • Convolutional neural networks (CNNs) have demonstrated outstanding performance compared to other algorithms in the field of face recognition. Regarding the over-fitting problem of CNNs, researchers have proposed residual networks to ease training and improve recognition accuracy. In this study, a novel face recognition model based on game theory for classroom roll-call was proposed. In the proposed scheme, an image with multiple faces is used as input, and the residual network identifies each face with a confidence score to form a list of student identities. Faces tracked to the same identity, or faces with low confidence, are treated as the optimisation objective, with the set of game participants formed from the student identity list. Game theory then optimises the authentication strategy according to the confidence values and the identity set to improve recognition accuracy. We observed that there exists an optimal mapping between faces and identities that avoids multiple faces being associated with one identity in the proposed scheme, and that the proposed game-based scheme can reduce the error rate compared to existing schemes with deeper neural networks.
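The one-to-one face-to-identity mapping the scheme aims for can be sketched with a simple greedy assignment over a confidence matrix. The paper's game-theoretic optimisation is more involved; this only illustrates the constraint that no identity may be claimed by two faces. The score values are invented.

```python
def assign_identities(scores):
    """Greedy one-to-one assignment of detected faces to identities:
    repeatedly take the highest remaining confidence pair, skipping any
    face or identity that is already assigned."""
    pairs = sorted(
        ((conf, f, i)
         for f, row in enumerate(scores)
         for i, conf in enumerate(row)),
        reverse=True,
    )
    used_f, used_i, mapping = set(), set(), {}
    for conf, f, i in pairs:
        if f not in used_f and i not in used_i:
            mapping[f] = i
            used_f.add(f)
            used_i.add(i)
    return mapping

# Rows: detected faces; columns: identities on the class list.
scores = [
    [0.9, 0.6, 0.1],
    [0.8, 0.7, 0.2],
    [0.3, 0.4, 0.9],
]
mapping = assign_identities(scores)
```

A globally optimal assignment would use the Hungarian algorithm instead of this greedy pass, but the greedy version already enforces the key property: each identity appears at most once in the mapping.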

Exploring Support Vector Machine Learning for Cloud Computing Workload Prediction

  • ALOUFI, OMAR
    • International Journal of Computer Science & Network Security / v.22 no.10 / pp.374-388 / 2022
  • Cloud computing has been one of the most critical technologies of the last few decades, invented for several purposes, such as meeting user requirements and satisfying users' needs in simple ways. Since its invention, cloud computing has followed traditional approaches to elasticity, its key characteristic. Elasticity is the feature of cloud computing that seeks to meet users' needs without interruption at run time. Traditional approaches to elasticity have been studied for several years using various mathematical models. Even though mathematical modelling has been a step forward in meeting users' needs, the optimisation of elasticity is still lacking. To optimise elasticity in the cloud, machine learning algorithms can be used to predict upcoming workloads and feed them to the scheduling algorithm, which would achieve excellent provisioning of cloud services, improve the Quality of Service (QoS), and save power. Therefore, this paper aims to investigate the use of machine learning techniques to predict the workload of Physical Hosts (PHs) on the cloud and their energy consumption. The cloud environment hosting the experiments is the school of computing cloud testbed (SoC). The experiments run real applications with different behaviours, changing workloads over time. The results demonstrate that the machine learning techniques used in the scheduling algorithm can predict the workload (CPU utilisation) of physical hosts, which contributes to reducing power consumption by scheduling upcoming virtual machines to the physical host with the lowest CPU utilisation. Additionally, a number of tools are used and explored in this paper, such as the WEKA tool, to train on the real data and explore machine learning algorithms, and the Zabbix tool, to monitor power consumption before and after scheduling the virtual machines to physical hosts. Moreover, the methodology of the paper follows the agile approach, which helps in achieving the solution and managing the work effectively.
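The placement policy described at the end (send each upcoming virtual machine to the least-utilised physical host) can be sketched in a few lines. Host names and utilisation figures are illustrative, and in the paper the workload figures would come from the SVM predictions rather than the literals used here.

```python
def schedule_vms(hosts, vms):
    """Place each predicted VM load on the physical host with the lowest
    current CPU utilisation, updating utilisation as VMs are placed."""
    placement = []
    for vm_load in vms:
        target = min(hosts, key=hosts.get)   # least-utilised host
        hosts[target] += vm_load
        placement.append((vm_load, target))
    return placement, hosts

# Predicted CPU utilisation (%) per physical host, e.g. from an SVM model
hosts = {"ph1": 30.0, "ph2": 55.0, "ph3": 10.0}
vms = [20.0, 15.0, 25.0]
placement, hosts_after = schedule_vms(hosts, vms)
```

Because utilisation is updated after every placement, the policy balances load greedily rather than dumping all new VMs onto the host that was initially idle, which is the behaviour that lets already-busy hosts stay below saturation.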