• Title/Summary/Keyword: Low Computational Complexity

Development of Agent-based Platform for Coordinated Scheduling in Global Supply Chain (글로벌 공급사슬에서 경쟁협력 스케줄링을 위한 에이전트 기반 플랫폼 구축)

  • Lee, Jung-Seung; Choi, Seong-Woo
    • Journal of Intelligence and Information Systems / v.17 no.4 / pp.213-226 / 2011
  • In a global supply chain, the scheduling problems of large products such as ships, airplanes, space shuttles, assembled constructions, and automobiles are complicated by nature. New scheduling systems are often developed to reduce the inherent computational complexity. One common approach is to decompose the problem into small sub-problems, each handled by an independent scheduling system whose results are integrated back into the original problem. As one of the authors experienced, DAS (Daewoo Shipbuilding Scheduling System) adopted a two-layered hierarchical architecture in which the individual scheduling systems, a high-level dock scheduler (DAS-ERECT) and low-level assembly-plant schedulers (DAS-PBS, DAS-3DS, DAS-NPS, and DAS-A7), each search for the best schedules under their own constraints. Moreover, the rapid growth of communication technology and logistics makes it possible to introduce distributed multinational production, in which different parts are produced by designated plants, so vertical and lateral coordination among the decomposed scheduling systems is necessary. No standard coordination mechanism for multiple scheduling systems exists, even though various scheduling systems have been studied in the scheduling literature. Previous research on coordination mechanisms has mainly focused on external conversation without a capacity model, while agent research has focused heavily on agent-based coordination but has not addressed the scheduling domain. Previous research on agent-based scheduling has concentrated on internal coordination of the scheduling process, which has not been efficient. In this study, we suggest a general framework for agent-based coordination of multiple scheduling systems in a global supply chain, with the goal of designing a standard coordination mechanism. First, we define an individual scheduling agent responsible for its own plant and a meta-level coordination agent that interacts with each individual scheduling agent, and we specify the variables and values describing both kinds of agent in Backus-Naur Form. Second, we suggest scheduling agent communication protocols for each scheduling agent topology, classified by system architecture, the presence or absence of a coordinator, and the direction of coordination. If a coordinating agent exists, an individual scheduling agent communicates with other individual agents indirectly through the coordinator; otherwise, it communicates with them directly. To apply an agent communication language specifically to the scheduling coordination domain, we additionally define an inner language that suitably expresses scheduling coordination. The resulting scheduling agent communication language is devised for communication among agents independent of domain and adopts three message layers: an ACL layer, a scheduling coordination layer, and an industry-specific layer (a minimal sketch of this layered format follows the abstract). The ACL layer is a domain-independent outer language layer, the scheduling coordination layer contains the terms necessary for scheduling coordination, and the industry-specific layer expresses the industry specification.
Third, to improve the efficiency of communication among scheduling agents and avoid possible infinite loops, we suggest a look-ahead load balancing model that monitors participating agents and analyzes their status. To build this model, the status of participating agents must be monitored, and above all the amount of shared information must be considered: if complete information is collected, the cost of updating and maintaining the shared information increases even though the frequency of communication decreases. The level of detail and the update period of the shared information should therefore be decided contingently (a small illustration of this trade-off also follows). By means of this standard coordination mechanism, we can easily model the coordination processes of multiple scheduling systems in a supply chain. Finally, we apply the mechanism to the shipbuilding domain and develop a prototype system consisting of a dock-scheduling agent, four assembly-plant-scheduling agents, and a meta-level coordination agent. A series of experiments using real-world data is used to empirically examine the mechanism. The results of this study show that the effect of the agent-based platform on coordinated scheduling is evident in terms of the number of tardy jobs, tardiness, and makespan.
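
The abstract describes the three message layers only at a high level. The following is a minimal Python sketch of how such a layered scheduling message might be represented; the class and field names (ACLLayer, performative, job_id, block_id, and so on) are illustrative assumptions, not the authors' BNF vocabulary or protocol.

    from dataclasses import dataclass, field
    from typing import Dict, Optional

    @dataclass
    class ACLLayer:
        """Domain-independent outer layer (an ACL-style envelope)."""
        performative: str        # e.g. "request", "inform", "propose"
        sender: str              # agent identifier, e.g. "DAS-PBS"
        receiver: str            # a peer agent or the meta-level coordination agent
        conversation_id: str

    @dataclass
    class SchedulingCoordinationLayer:
        """Inner layer carrying terms needed for scheduling coordination."""
        job_id: str
        requested_start: str
        requested_finish: str
        capacity_load: float     # share of plant capacity the job would consume

    @dataclass
    class IndustrySpecificLayer:
        """Industry-specific terms, sketched here for shipbuilding."""
        attributes: Dict[str, str] = field(default_factory=dict)

    @dataclass
    class SchedulingMessage:
        acl: ACLLayer
        coordination: SchedulingCoordinationLayer
        industry: IndustrySpecificLayer

    def next_hop(msg: SchedulingMessage, coordinator: Optional[str]) -> str:
        """With a coordinating agent, messages are relayed through it; otherwise peer-to-peer."""
        return coordinator if coordinator is not None else msg.acl.receiver

    # Example: an assembly-plant agent asks the dock scheduler to shift a block.
    msg = SchedulingMessage(
        ACLLayer("request", "DAS-PBS", "DAS-ERECT", "conv-001"),
        SchedulingCoordinationLayer("JOB-17", "2011-03-02", "2011-03-20", 0.35),
        IndustrySpecificLayer({"block_id": "B-1042"}),
    )
    print(next_hop(msg, coordinator="META-COORD"))   # relayed via the coordinator

Keeping the domain-independent envelope separate from the scheduling and industry layers mirrors the abstract's point that the outer ACL layer can be reused across domains while only the inner layers change.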
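The trade-off behind the look-ahead load balancing model, more frequent updates of shared status information cost more to maintain but leave queries less stale, can be illustrated with a toy calculation. All cost figures below are hypothetical and exist only to show how an update period might be chosen contingently, as the abstract suggests.

    def daily_cost(update_period_hours: float,
                   maintenance_cost_per_update: float = 4.0,
                   queries_per_hour: float = 2.0,
                   staleness_cost_per_query_hour: float = 0.1) -> float:
        """Hypothetical daily cost of sharing status information at a given update period."""
        # Frequent updates raise maintenance cost.
        maintenance = (24.0 / update_period_hours) * maintenance_cost_per_update
        # Infrequent updates leave queries working with data that is, on average,
        # half an update period old; that staleness is priced per hour here.
        staleness = 24.0 * queries_per_hour * staleness_cost_per_query_hour * (update_period_hours / 2.0)
        return maintenance + staleness

    # Choose the cheapest of a few candidate update periods (in hours).
    candidates = [1, 2, 4, 8, 12, 24]
    best_period = min(candidates, key=daily_cost)
    print(best_period, daily_cost(best_period))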

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu; Alikhanov, Jumabek; Fang, Yang; Ko, Seunghyun; Jo, Geun Sik
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.205-225 / 2018
  • The Convolutional Neural Network (ConvNet) is one class of powerful Deep Neural Network that can analyze and learn hierarchies of visual features. The first such network, the Neocognitron, was introduced in the 1980s, but at that time neural networks were not broadly used in industry or academia because of the shortage of large-scale datasets and the low computational power available. A few decades later, in 2012, Krizhevsky achieved a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, and that breakthrough revived interest in neural networks. The success of Convolutional Neural Networks rests on two main factors: the emergence of advanced hardware (GPUs) for sufficient parallel computation, and the availability of large-scale datasets such as ImageNet (ILSVRC) for training. Unfortunately, many new domains are bottlenecked by these factors: for most domains, gathering a large-scale dataset to train a ConvNet is difficult and labor-intensive, and even when such a dataset is available, training a ConvNet from scratch requires expensive resources and is time-consuming. Both obstacles can be addressed with transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning scenarios: using the ConvNet as a fixed feature extractor, and fine-tuning the ConvNet on a new dataset. In the first case, a pre-trained ConvNet (for example, one trained on ImageNet) computes feed-forward activations of the image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are then fine-tuned with backpropagation. In this paper, we focus on using multiple ConvNet layers as a fixed feature extractor only. However, applying the high-dimensional features extracted directly from multiple ConvNet layers is still a challenging problem. We observe that features extracted from different ConvNet layers capture different characteristics of the image, which means a better representation can be obtained by finding the optimal combination of multiple ConvNet layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single ConvNet layer representation. Overall, our primary pipeline has three steps (a minimal sketch of the pipeline follows the abstract). First, each image from the target task is fed forward through the pre-trained AlexNet, and activation features are extracted from its three fully connected layers. Second, the activation features of the three layers are concatenated to obtain a multiple-ConvNet-layer representation, because the combination carries more information about the image; when the three fully connected layer features are concatenated, the resulting image representation has 9192 (4096 + 4096 + 1000) dimensions. However, because these features are extracted from the same ConvNet, they are redundant and noisy. Thus, as a third step, Principal Component Analysis (PCA) is used to select salient features before the training phase. With salient features, the classifier can classify images more accurately, and the performance of transfer learning improves.
To evaluate the proposed method, experiments are conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare the multiple-ConvNet-layer representation against single-ConvNet-layer representations, using PCA for feature selection and dimensionality reduction. Our experiments demonstrate the importance of feature selection for the multiple-ConvNet-layer representation. Moreover, the proposed approach achieved 75.6% accuracy versus 73.9% for the FC7 layer on Caltech-256, 73.1% versus 69.2% for the FC8 layer on VOC07, and 52.2% versus 48.7% for the FC7 layer on SUN397. We also showed that the proposed approach achieved superior performance, with accuracy improvements of 2.8%, 2.1%, and 3.1% on Caltech-256, VOC07, and SUN397 respectively, compared to existing work.
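
The following is a minimal sketch of the kind of pipeline the abstract describes, not the authors' implementation: activations from the three fully connected layers of a pre-trained AlexNet (here via torchvision forward hooks) are concatenated into a 9192-dimensional vector (4096 + 4096 + 1000), reduced with PCA, and fed to a linear classifier. The choice of torchvision, LinearSVC, and the PCA dimensionality are assumptions for illustration.

    import numpy as np
    import torch
    from torchvision import models, transforms
    from sklearn.decomposition import PCA
    from sklearn.svm import LinearSVC

    device = "cuda" if torch.cuda.is_available() else "cpu"
    alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).to(device).eval()

    # Capture FC6, FC7, and FC8 outputs with forward hooks on the classifier sub-layers.
    captured = {}
    def make_hook(name):
        def hook(module, inputs, output):
            captured[name] = output.detach().cpu()
        return hook

    alexnet.classifier[1].register_forward_hook(make_hook("fc6"))  # Linear(9216 -> 4096)
    alexnet.classifier[4].register_forward_hook(make_hook("fc7"))  # Linear(4096 -> 4096)
    alexnet.classifier[6].register_forward_hook(make_hook("fc8"))  # Linear(4096 -> 1000)

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    def extract_features(pil_images):
        """Return an (N, 9192) array of concatenated FC6/FC7/FC8 activations."""
        feats = []
        with torch.no_grad():
            for img in pil_images:
                x = preprocess(img).unsqueeze(0).to(device)
                alexnet(x)  # forward pass fills `captured` via the hooks
                feats.append(torch.cat([captured["fc6"], captured["fc7"], captured["fc8"]],
                                       dim=1).squeeze(0).numpy())
        return np.stack(feats)

    # Hypothetical training step: `train_images`, `train_labels`, `test_images`, and
    # `test_labels` are assumed to come from the target dataset (e.g. Caltech-256).
    # X_train = extract_features(train_images)
    # pca = PCA(n_components=1000).fit(X_train)   # reduced dimensionality is a free choice
    # clf = LinearSVC().fit(pca.transform(X_train), train_labels)
    # accuracy = clf.score(pca.transform(extract_features(test_images)), test_labels)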