• Title/Summary/Keyword: Hybrid Algorithm


Computational estimation of the earthquake response for fibre reinforced concrete rectangular columns

  • Liu, Chanjuan;Wu, Xinling;Wakil, Karzan;Jermsittiparsert, Kittisak;Ho, Lanh Si;Alabduljabbar, Hisham;Alaskar, Abdulaziz;Alrshoudi, Fahed;Alyousef, Rayed;Mohamed, Abdeliazim Mustafa
    • Steel and Composite Structures / v.34 no.5 / pp.743-767 / 2020
  • Owing to its impressive flexural performance, enhanced compressive strength, and more constrained crack propagation, fibre-reinforced concrete (FRC) has been widely employed in construction applications. The majority of experimental studies have focused on the seismic behavior of FRC columns. Based on valid experimental data obtained from previous studies, the current study evaluates the seismic response and compressive strength of FRC rectangular columns using hybrid metaheuristic techniques. Because of the non-linearity of seismic data, an adaptive neuro-fuzzy inference system (ANFIS) was combined with metaheuristic algorithms. A database of 317 datasets from FRC column tests was used to determine the most influential factors on the ultimate strengths of FRC rectangular columns subjected to simulated seismic loading. ANFIS was coupled with particle swarm optimization (PSO) and a genetic algorithm (GA), and an extreme learning machine (ELM) was used in parallel as a reference prediction method. The variable selection procedure chooses the most dominant parameters affecting the ultimate strengths of FRC rectangular columns under simulated seismic loading. The results show that ANFIS-PSO predicted the seismic lateral load with R² = 0.857 and 0.902 for the test and training phases, respectively, and was therefore selected as the lateral-load estimator. For compressive strength, ELM achieved R² = 0.657 and 0.862 for the test and training phases, respectively. The seismic lateral force trend is thus more predictable than the compressive strength of FRC rectangular columns, with the best results obtained for lateral force prediction. Compressive strength predictions deviate significantly above 40 MPa, which could be related to the considerable non-linearity and possible empirical shortcomings. Finally, employing ANFIS-GA and ANFIS-PSO to evaluate the seismic response of FRC is a promising, reliable alternative to costly and time-consuming experimental tests.
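
A minimal sketch of the metaheuristic-plus-regression workflow described above: a plain global-best PSO tunes the weights of a simple surrogate model (standing in for ANFIS) and the fit is scored with R² on train/test splits. The data, model form, and PSO settings are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical column-test features and lateral load (stand-ins for the 317-test database).
X = rng.uniform(0.0, 1.0, size=(317, 4))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 - X[:, 2] * X[:, 3] + rng.normal(0, 0.05, 317)

split = int(0.8 * len(X))
X_train, X_test, y_train, y_test = X[:split], X[split:], y[:split], y[split:]

def predict(w, X):
    # Simple quadratic surrogate; a real ANFIS would use fuzzy rules instead.
    return X @ w[:4] + (X ** 2) @ w[4:8] + w[8]

def mse(w, X, y):
    return np.mean((predict(w, X) - y) ** 2)

def r2(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Minimal global-best PSO over the 9 surrogate-model weights.
n_particles, n_dims, n_iters = 30, 9, 200
pos = rng.uniform(-1, 1, (n_particles, n_dims))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([mse(p, X_train, y_train) for p in pos])
gbest = pbest[np.argmin(pbest_cost)].copy()

for _ in range(n_iters):
    r1, r2_ = rng.random((n_particles, n_dims)), rng.random((n_particles, n_dims))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2_ * (gbest - pos)
    pos = pos + vel
    cost = np.array([mse(p, X_train, y_train) for p in pos])
    improved = cost < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print("train R2:", r2(y_train, predict(gbest, X_train)))
print("test  R2:", r2(y_test, predict(gbest, X_test)))
```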

Integration of Ontology Open-World and Rule Closed-World Reasoning (온톨로지 Open World 추론과 규칙 Closed World 추론의 통합)

  • Choi, Jung-Hwa;Park, Young-Tack
    • Journal of KIISE:Software and Applications / v.37 no.4 / pp.282-296 / 2010
  • OWL is an ontology language for the Semantic Web, suited to modelling the knowledge of a specific real-world domain. An ontology can also infer new implicit knowledge from explicit knowledge. However, the modelled knowledge cannot be complete, since the whole of human common sense cannot be represented. Ontologies do not handle the non-monotonic reasoning needed to detect incomplete modelling, such as integrity constraints and exceptions. A default rule can handle exceptions for a specific class in an ontology, and integrity constraints can make explicit which, and how many, relationships the instances of a class must hold. In this paper, we propose a practical reasoning system for open- and closed-world reasoning that supports a novel hybrid integration of ontologies based on the open-world assumption (OWA) and non-monotonic rules based on the closed-world assumption (CWA). The system uses answer set programming (ASP) to solve the problems that arise when dealing with incomplete knowledge under the OWA. ASP is a logic-programming formalism that can be seen as the computational embodiment of non-monotonic reasoning and enables CWA-based queries over a description-logic knowledge base (KB). Our system not only finds practical cases requiring non-monotonic reasoning in Protege examples, but also derives new reasoning results for these cases over a KB that realizes a transparent integration of rules and ontologies, as supported by several well-known projects.
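
The OWA/CWA distinction the paper integrates can be illustrated with a toy query function. This is only a conceptual sketch in Python, not the authors' ASP/description-logic machinery, and the facts are hypothetical.

```python
# Toy knowledge base: explicitly asserted facts only.
known_true = {("member", "alice", "team_a")}

def query_owa(fact):
    # Open-world assumption: anything not asserted is merely unknown.
    return "true" if fact in known_true else "unknown"

def query_cwa(fact):
    # Closed-world assumption: anything not derivable is taken to be false,
    # which is what a non-monotonic rule layer (e.g. ASP) provides on top of OWL.
    return "true" if fact in known_true else "false"

fact = ("member", "bob", "team_a")
print("OWA:", query_owa(fact))   # unknown
print("CWA:", query_cwa(fact))   # false
```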

A Hybrid Knowledge Representation Method for Pedagogical Content Knowledge (교수내용지식을 위한 하이브리드 지식 표현 기법)

  • Kim, Yong-Beom;Oh, Pill-Wo;Kim, Yung-Sik
    • Korean Journal of Cognitive Science / v.16 no.4 / pp.369-386 / 2005
  • Although intelligent tutoring systems (ITS) offer an individualized learning environment that overcomes the limited functionality of existing CAI and considers many learner variables, they are rarely used in schools because of the inefficiency of the investment and the absence of techniques for representing pedagogical content knowledge. To solve these problems, methods for representing ITS knowledge and reusing the knowledge base are needed. Regarding pedagogical content knowledge, knowledge in education differs from knowledge in the general sense. This paper primarily addresses a multi-complex structure of knowledge and the explanation of the learning flow using that structure. The multi-complex structure is organized into nodes and clusters used by the knowledge base, and it grows into an adaptive knowledge base through self-learning. We therefore propose the Extended Neural Logic Network (X-Neuronet), which is based on the neural logic network with logical inference and topological inflexibility in the cognitive structure, incorporates pedagogical content knowledge and object-oriented concepts, and is verified for validity. X-Neuronet defines knowledge as a directed combination with inertia and weights, and provides the basic concepts for its representation: logic operators for operation and processing, node values and connection weights, a propagation rule, and a learning algorithm.
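
As a rough illustration of node values, connection weights, and a propagation rule in a neural-logic-style network, here is a minimal sketch using the common ordered-pair (true, false) convention. The actual X-Neuronet propagation rule, inertia term, and learning algorithm are defined in the paper and may differ.

```python
from typing import List, Tuple

Pair = Tuple[float, float]  # (evidence for, evidence against)

def propagate(inputs: List[Pair], weights: List[Pair]) -> Pair:
    # Net activation: the positive weight component acts on the "for" part,
    # the negative component on the "against" part of each input.
    net = sum(a * x - b * y for (x, y), (a, b) in zip(inputs, weights))
    if net >= 1.0:
        return (1.0, 0.0)   # node fires "true"
    if net <= -1.0:
        return (0.0, 1.0)   # node fires "false"
    return (0.0, 0.0)       # unknown / undecided

# Example: two prerequisite concepts, one mastered and one not.
print(propagate([(1.0, 0.0), (0.0, 1.0)], [(2.0, 2.0), (1.0, 1.0)]))
```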


Implementation of Massive FDTD Simulation Computing Model Based on MPI Cluster for Semi-conductor Process (반도체 검증을 위한 MPI 기반 클러스터에서의 대용량 FDTD 시뮬레이션 연산환경 구축)

  • Lee, Seung-Il;Kim, Yeon-Il;Lee, Sang-Gil;Lee, Cheol-Hoon
    • The Journal of the Korea Contents Association / v.15 no.9 / pp.21-28 / 2015
  • In the semiconductor process, simulation is performed to detect defects by analyzing the behavior of impurities through the calculation of the physical quantities of the inner elements. The finite-difference time-domain (FDTD) algorithm is used to perform this simulation. As semiconductors improve and are composed of nanoscale elements, the size of the simulation keeps growing. Problems arise when a single processor such as a CPU or GPU cannot perform the simulation because of the massive matrix size, or when even a computer consisting of multiple processors cannot handle a massive FDTD. Such problems have been addressed with parallel/distributed computing, but past studies used only a single type of processor: a GPU performs fast but has limited memory, while a CPU performs more slowly than a GPU. To solve this problem, we implemented a computing model that can handle FDTD simulations of any size on a cluster consisting of heterogeneous processors. We tested the simulation using MPI libraries based on point-to-point communication and verified that it operates correctly regardless of the number and type of nodes. We also analyzed the performance by measuring the total execution time and the time of specific stages of the simulation in each test.
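
A minimal sketch of a distributed 1-D FDTD update with point-to-point halo exchange over MPI, in the spirit of the cluster computing model described above. The field layout, normalization, boundary treatment, and source are illustrative assumptions, not the authors' semiconductor simulation code. Run with, e.g., `mpiexec -n 4 python fdtd_mpi_sketch.py`.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_global, n_steps = 400, 200
n_local = n_global // size                    # cells owned by this rank (assumes divisibility)
ez = np.zeros(n_local)                        # electric field Ez
hy = np.zeros(n_local)                        # magnetic field Hy
ez_ghost = np.zeros(1)                        # Ez from the right neighbour
hy_ghost = np.zeros(1)                        # Hy from the left neighbour

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for step in range(n_steps):
    # Point-to-point halo exchange: my leftmost Ez goes to the left neighbour,
    # and the right neighbour's leftmost Ez arrives as a ghost value.
    comm.Sendrecv(ez[0:1], dest=left, recvbuf=ez_ghost, source=right)
    hy[:-1] += 0.5 * (ez[1:] - ez[:-1])       # Hy[i] sits between Ez[i] and Ez[i+1]
    hy[-1] += 0.5 * (ez_ghost[0] - ez[-1])

    # Symmetric exchange for Hy, then the Ez update.
    comm.Sendrecv(hy[-1:], dest=right, recvbuf=hy_ghost, source=left)
    ez[1:] += 0.5 * (hy[1:] - hy[:-1])
    ez[0] += 0.5 * (hy[0] - hy_ghost[0])

    if rank == 0:                             # soft Gaussian source injected by rank 0
        ez[n_local // 2] += np.exp(-((step - 30) / 10.0) ** 2)

print(f"rank {rank}: max |Ez| = {np.abs(ez).max():.3f}")
```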

Fine Grained Resource Scaling Approach for Virtualized Environment (가상화 환경에서 세밀한 자원 활용률 적용을 위한 스케일 기법)

  • Lee, Donhyuck;Oh, Sangyoon
    • Journal of the Korea Society of Computer and Information / v.18 no.7 / pp.11-21 / 2013
  • Operating a large-scale computing resource such as a data center has recently become easier because of virtualization technology, which virtualizes servers and enables flexible resource provisioning. Most public cloud services provide automatic scaling in the form of scale-in or scale-out, and these approaches work well to satisfy users' service level agreements (SLAs). However, a different scaling approach is required to operate private clouds, which have a much smaller amount of computing resources than the vast resources of public clouds. In this paper, we propose a hybrid server scaling architecture and related algorithms that use both scale-in and scale-out to achieve a higher resource utilization rate for private clouds. The proposed algorithm uses dynamic resource allocation and live migration, and the system aims to provide fine-grained, stepwise resource scaling. Private cloud systems can thus maintain stable service and reduce server management cost by optimizing server utilization. The experimental results show that the proposed approach achieves better resource utilization than a scale-out approach based on the number of users.
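
A minimal sketch of a stepwise scaling decision of the kind described above: adjust the allocation of the current server first, and fall back to scale-out only when the host is exhausted. The thresholds, step size, and action names are illustrative assumptions, not the paper's algorithm.

```python
from dataclasses import dataclass

@dataclass
class Server:
    allocated_cores: int       # cores currently allocated to the VM
    max_cores: int             # physical cores available on the host
    utilization: float         # measured CPU utilization of the VM (0..1)

def scale_decision(server: Server, high=0.8, low=0.3, step=1):
    """Return a fine-grained scaling action instead of jumping straight to scale-out."""
    if server.utilization > high:
        if server.allocated_cores + step <= server.max_cores:
            return ("adjust-up", server.allocated_cores + step)     # grow allocation in place
        return ("scale-out", None)                                  # host exhausted: add a server / migrate
    if server.utilization < low and server.allocated_cores > 1:
        return ("adjust-down", server.allocated_cores - step)       # release unused capacity
    return ("hold", server.allocated_cores)

print(scale_decision(Server(allocated_cores=2, max_cores=8, utilization=0.92)))
print(scale_decision(Server(allocated_cores=8, max_cores=8, utilization=0.95)))
print(scale_decision(Server(allocated_cores=4, max_cores=8, utilization=0.15)))
```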

Optimal Design of Network-on-Chip Communication Structure (Network-on-Chip에서의 최적 통신구조 설계)

  • Yoon, Joo-Hyeong;Hwang, Young-Si;Chung, Ki-Seok
    • Journal of the Institute of Electronics Engineers of Korea SD / v.44 no.8 / pp.80-88 / 2007
  • High adaptability and scalability are two critical issues in implementing a very complex system on a single chip. To obtain them, a novel system design methodology known as communication-based system design has gained considerable attention from SoC designers. NoC (Network-on-Chip) is such an on-chip, communication-based design approach for next-generation SoC design. To provide high adaptability and scalability, NoCs employ network interfaces and routers as their main communication structures and transmit and receive packetized data over these structures. However, the run-time and area overhead of data packetization and routing may be too costly compared with conventional SoC communication structures. Therefore, in this research we propose a methodology that automatically generates a hybrid communication structure: traditional pin-to-pin wiring is mapped to frequent, timing-critical communication, while the flexible and scalable NoC structure is mapped to infrequent or highly variable communication patterns. Even though our algorithm simplifies the communication structure significantly, the connectivity and scalability of the communication modules are almost the same as in the original NoC design. Using this method, we improved timing performance by 49.19% and reduced the area taken by the communication structure by 24.03%.
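
A minimal sketch of the hybrid mapping idea: channels are classified into dedicated pin-to-pin wires or shared NoC links based on their traffic rate and timing slack. The channel attributes and thresholds are illustrative assumptions, not the paper's generation algorithm.

```python
from dataclasses import dataclass

@dataclass
class Channel:
    src: str
    dst: str
    transfers_per_cycle: float   # average traffic rate on this channel
    slack_ns: float              # timing slack of the communication path

def map_channel(ch: Channel, rate_threshold=0.10, slack_threshold=1.0):
    # Frequent or timing-critical traffic gets a dedicated point-to-point wire;
    # everything else is routed over the shared, packet-switched NoC.
    if ch.transfers_per_cycle >= rate_threshold or ch.slack_ns < slack_threshold:
        return "pin-to-pin"
    return "noc"

channels = [
    Channel("cpu", "l2cache", transfers_per_cycle=0.45, slack_ns=0.4),
    Channel("dma", "uart",    transfers_per_cycle=0.01, slack_ns=8.0),
]
for ch in channels:
    print(ch.src, "->", ch.dst, ":", map_channel(ch))
```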

Hybrid Watermarking Technique using DWT Subband Structure and Spatial Edge Information (DWT 부대역구조와 공간 윤곽선정보를 이용한 하이브리드 워터마킹 기술)

  • 서영호;김동욱
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.5C / pp.706-715 / 2004
  • In this paper, the subband tree structure of the wavelet domain and the edge information of the spatial domain are used to decide the watermark embedding positions and to embed the watermark. The significant frequency region is estimated by searching the subbands from the higher-frequency subband toward the lower-frequency subband. The LH1 subband, which has the highest frequency in the wavelet tree structure, is divided into 4×4 submatrices, and the threshold used in watermark embedding is obtained from a block matrix formed from the averages of the 4×4 submatrices. The watermark embedding position map (Keymap) is also generated from this block matrix, reflecting the energy distribution in the frequency domain and the edge information in the spatial domain. The watermark is embedded into the wavelet coefficients using the Keymap and a random sequence generated by an LFSR (linear feedback shift register), and the watermarked image is finally obtained after the inverse wavelet transform. Against attacks such as JPEG compression and common image processing operations like blurring, sharpening, and Gaussian noise, the proposed watermarking algorithm showed a PSNR gain of over 2 dB and results 2% to 8% higher than previous research.
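
A minimal sketch of the LFSR-generated pseudo-random sequence mentioned above, here producing a ±1 spreading sequence. The register length, taps, and seed are assumptions; the paper does not state its LFSR parameters in this abstract.

```python
def lfsr_bits(seed: int, taps=(16, 14, 13, 11), nbits: int = 16):
    """Yield pseudo-random bits from a 16-bit Fibonacci LFSR (maximal-length taps)."""
    state = seed & ((1 << nbits) - 1)
    assert state != 0, "LFSR seed must be non-zero"
    while True:
        # XOR the tapped bits to form the feedback bit.
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1
        state = ((state << 1) | fb) & ((1 << nbits) - 1)
        yield fb

# Example: a +/-1 spreading sequence for, say, 16 embedding positions from the Keymap.
gen = lfsr_bits(seed=0xACE1)
sequence = [1 if next(gen) else -1 for _ in range(16)]
print(sequence)
```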

Real-Time Face Recognition Based on Subspace and LVQ Classifier (부분공간과 LVQ 분류기에 기반한 실시간 얼굴 인식)

  • Kwon, Oh-Ryun;Min, Kyong-Pil;Chun, Jun-Chul
    • Journal of Internet Computing and Services / v.8 no.3 / pp.19-32 / 2007
  • This paper presents a face recognition method based on an LVQ neural network for constructing a real-time face recognition system. Previous approaches that combined PCA or LDA with a neural network usually require a long time to train the network. The supervised LVQ network needs much less training time and can maximize the separability between classes. The proposed method sequentially transforms the input face image with PCA and LDA into low-dimensional feature vectors and recognizes the face with the LVQ network. To make the system robust to external lighting variation, light compensation is performed on the detected face by max-min normalization as preprocessing. PCA and LDA transformations are then applied to the normalized face image to produce the low-dimensional feature vectors. The K-means clustering algorithm is adopted to determine the initial LVQ centers and to speed up the convergence of the LVQ network. The class-representative vectors are subsequently produced by LVQ2 training from these initial center vectors. Face recognition is achieved using the Euclidean distance between the class center vectors and the feature vector of the input image. The experiments show that the proposed method achieves a higher recognition rate on still images from the ORL database and on image sequences than conventional PCA or the hybrid PCA-LDA method.
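
A minimal sketch of LVQ training with mean-initialized prototypes and Euclidean-distance classification, as outlined above. For brevity this shows the basic LVQ1 update rather than LVQ2, and the feature vectors are synthetic stand-ins for the PCA+LDA projections.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical low-dimensional face features: 3 classes, 30 samples each.
X = np.vstack([rng.normal(c, 0.5, size=(30, 4)) for c in (0.0, 2.0, 4.0)])
y = np.repeat([0, 1, 2], 30)

# One prototype per class, initialized at the class mean (a 1-cluster K-means).
protos = np.vstack([X[y == c].mean(axis=0) for c in range(3)])
proto_labels = np.array([0, 1, 2])

def nearest(prototypes, x):
    return np.argmin(np.linalg.norm(prototypes - x, axis=1))

# LVQ1 training: pull the winning prototype toward same-class samples, push it away otherwise.
alpha = 0.05
for epoch in range(20):
    for i in rng.permutation(len(X)):
        w = nearest(protos, X[i])
        direction = 1.0 if proto_labels[w] == y[i] else -1.0
        protos[w] += direction * alpha * (X[i] - protos[w])

pred = np.array([proto_labels[nearest(protos, x)] for x in X])
print("training accuracy:", (pred == y).mean())
```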


A Coevolution of Artificial-Organism Using Classification Rule And Enhanced Backpropagation Neural Network (분류규칙과 강화 역전파 신경망을 이용한 이종 인공유기체의 공진화)

  • Cho Nam-Deok;Kim Ki-Tae
    • The KIPS Transactions:PartB / v.12B no.3 s.99 / pp.349-356 / 2005
  • Application areas that use artificial organisms are expanding rapidly, with the aim of getting things done in dynamic, informal environments. Using general programming or traditional AI methods to represent the behavior knowledge of artificial organisms in these areas can cause problems such as frequent modification and poor responses in unpredictable situations. Machine-learning strategies aimed at solving these problems include genetic programming and evolving neural networks, but the learning methods of artificial organisms are still not good enough and cannot represent life in the environment. With this in mind, this research proposes a new behavior evolution model. The model represents behavior knowledge with classification rules and an enhanced backpropagation neural network, and discriminates between the different kinds of organism. To evaluate the model, it was applied to competition problems between artificial organisms in a simulator and compared with other systems. The results show that the model prevails in terms of both the speed and the quality of learning. The model is characterized by the simultaneous learning of classification rules and neural networks represented on chromosomes with the help of a genetic algorithm, and by the consolidation of learning ability through the hybrid processing of the classification rules and the enhanced backpropagation neural network.
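
A minimal sketch of a chromosome that carries both rule thresholds and neural-network weights and is evolved by a plain genetic algorithm. The encoding, fitness function, and operators are illustrative assumptions, not the paper's coevolution setup.

```python
import numpy as np

rng = np.random.default_rng(1)
RULE_GENES, NN_GENES = 4, 12          # assumed split of the chromosome

def fitness(chrom):
    rules, weights = chrom[:RULE_GENES], chrom[RULE_GENES:]
    # Placeholder for a simulator-based evaluation of the organism's behavior.
    return -np.sum((rules - 0.5) ** 2) - np.sum((weights - 0.1) ** 2)

pop = rng.uniform(-1, 1, size=(40, RULE_GENES + NN_GENES))
for gen in range(100):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-20:]]                 # truncation selection
    cut = rng.integers(1, pop.shape[1], size=20)            # one-point crossover
    kids = np.array([np.concatenate([parents[i % 20][:c], parents[(i + 1) % 20][c:]])
                     for i, c in enumerate(cut)])
    kids += rng.normal(0, 0.05, kids.shape)                 # Gaussian mutation
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(c) for c in pop])]
print("best fitness:", fitness(best))
```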

Hybrid Multicast/Broadcast Algorithm for Highly-Demanded Video Services with Low Complexity (Highly-Demanded 비디오 서비스를 위한 낮은 복잡도의 혼합 멀티캐스트/브로드캐스트 알고리즘)

  • Li, Can;Bahk, Sae-Woong
    • The Journal of Korean Institute of Communications and Information Sciences / v.36 no.1B / pp.101-110 / 2011
  • With the deployment of broadband networking technology, many clients can receive various Video on Demand (VoD) services. To support many clients, the network should be designed by considering the following factors: the viewer's waiting time, the buffer requirement at each client, the number of channels required for video delivery, and the video segmentation complexity. Among the currently available VoD service approaches, the Polyharmonic and Staircase broadcasting approaches show the best performance with respect to viewer waiting time and buffer requirement, respectively. However, these approaches divide a video into too many segments, which causes very many channels to be managed and used at a time. To overcome this problem, we propose the Polyharmonic-Staircase-Staggered (PSS) broadcasting approach, which uses the Polyharmonic and Staircase approaches to transmit the head part of a video and the Staggered approach to transmit the tail part. It is simple and bandwidth-efficient. The numerical results demonstrate that the viewer waiting time of our approach is comparable to that of the Harmonic approach with a slight increase in the bandwidth requirement, and that it reduces the buffer requirement by about 60% compared with the Harmonic broadcasting approach simply by adjusting the video partitioning coefficient. More importantly, our approach shows the best performance in terms of the number of segments and the number of channels managed and used simultaneously, which is a critical factor in the real operation of VoD services. Lastly, we present how to configure the system adaptively according to the video partitioning coefficient.
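
A small worked comparison of the classic trade-off that PSS balances: Staggered broadcasting sends K time-shifted full copies of the video, while Harmonic broadcasting splits it into N equal segments and sends segment i at rate b/i. The numbers below are illustrative, not the paper's results.

```python
def staggered(K, video_len_min=120.0, b=1.0):
    waiting = video_len_min / K          # start times shifted by D/K
    bandwidth = K * b                    # K full-rate channels
    return waiting, bandwidth

def harmonic(N, video_len_min=120.0, b=1.0):
    waiting = video_len_min / N          # wait only for the first segment
    bandwidth = b * sum(1.0 / i for i in range(1, N + 1))   # harmonic series
    return waiting, bandwidth

for K in (4, 8):
    w, bw = staggered(K)
    print(f"staggered K={K}: wait {w:5.1f} min, bandwidth {bw:.2f} b")
for N in (4, 8, 64):
    w, bw = harmonic(N)
    print(f"harmonic  N={N}: wait {w:5.1f} min, bandwidth {bw:.2f} b")
```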