• Title/Summary/Keyword: partitioning approach

160 search results

Systems Engineering Approach for the Reuse of Metallic Waste From NPP Decommissioning and Dose Evaluation (금속해체 폐기물의 재활용을 위한 시스템엔지니어링 방법론 적용 및 피폭선량 평가)

  • Seo, Hyung-Woo;Kim, Chang-Lak
    • Journal of Nuclear Fuel Cycle and Waste Technology(JNFCWT)
    • /
    • v.15 no.1
    • /
    • pp.45-63
    • /
    • 2017
  • The oldest commercial reactor in South Korea, the Kori-1 Nuclear Power Plant (NPP), will be shut down in 2017. Proper treatment of decommissioning wastes is one of the key factors in decommissioning a plant successfully. Particularly important is the recycling of clearance-level or very-low-level radioactively contaminated metallic wastes, which contributes to waste minimization and the reduction of disposal volume. The aim of this study is to introduce a conceptual design of a recycle system and to evaluate the doses incurred through defined work flows. Various architecture diagrams were organized to define operational procedures and tasks. Potential exposure scenarios were selected in accordance with the recycle system, and the doses were evaluated with the RESRAD-RECYCLE computer code. Using this tool, the important scenarios and radionuclides, as well as the impacts of radionuclide characteristics and partitioning factors, were analyzed. Moreover, the dose analysis can be used to provide information on the necessary decontamination, radiation protection processes, and allowable concentration limits for exposure scenarios.

An Efficient VM-Level Scaling Scheme in an IaaS Cloud Computing System: A Queueing Theory Approach

  • Lee, Doo Ho
    • International Journal of Contents
    • /
    • v.13 no.2
    • /
    • pp.29-34
    • /
    • 2017
  • Cloud computing is becoming an effective and efficient way of integrating computing resources and computing services. Through centralized management of resources and services, cloud computing delivers hosted services over the internet, such that access to shared hardware, software, applications, information, and all resources is elastically provided to the consumer on demand. The main enabling technology for cloud computing is virtualization. Virtualization software creates a simulated or extended version of computing and network resources. The objectives of virtualization are as follows: first, to fully utilize the shared resources by applying partitioning and time-sharing; second, to centralize resource management; third, to enhance cloud data center agility and provide the required scalability and elasticity for on-demand capabilities; fourth, to improve testing and running software diagnostics on different operating platforms; and fifth, to improve the portability of applications and workload migration capabilities. One of the key features of cloud computing is elasticity. It enables users to create and remove virtual computing resources dynamically according to changing demand, but it is not easy to decide on the right amount of resources. Indeed, proper provisioning of resources to applications is an important issue in IaaS cloud computing. Most web applications encounter large and fluctuating volumes of task requests. In predictable situations, the resources can be provisioned in advance through capacity planning techniques. But in the case of unplanned or spiky requests, it is desirable to scale the resources automatically, called auto-scaling, which adjusts the resources allocated to an application based on its needs at any given time. This frees the user from the burden of deciding how many resources are necessary each time. In this work, we propose an analytical and efficient VM-level scaling scheme by modeling each VM in a data center as an M/M/1 processor-sharing queue (a simple M/M/1 sizing sketch is given below). Our proposed VM-level scaling scheme is validated via a numerical experiment.
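The scheme above rests on the standard M/M/1 processor-sharing model, whose mean response time is 1/(μ − λ) when the arrival rate λ is below the service rate μ. The sketch below is not the authors' scaling scheme; it only illustrates how that formula can size a VM pool against a response-time target, and the arrival rate, service rate, and SLA values are made up.

```python
# Minimal sketch (assumed numbers, not the paper's scheme): size a VM pool so
# that each M/M/1-PS VM keeps its mean response time under a target SLA.
def mm1_ps_response_time(arrival_rate, service_rate):
    """Mean sojourn time of an M/M/1 processor-sharing queue (requires rho < 1)."""
    if arrival_rate >= service_rate:
        raise ValueError("Unstable queue: arrival rate must be below service rate.")
    return 1.0 / (service_rate - arrival_rate)

def vms_needed(total_arrival_rate, service_rate, sla_seconds):
    """Smallest VM count n such that splitting traffic evenly over n VMs meets the SLA."""
    n = 1
    while True:
        per_vm_rate = total_arrival_rate / n
        if per_vm_rate < service_rate and \
                mm1_ps_response_time(per_vm_rate, service_rate) <= sla_seconds:
            return n
        n += 1

if __name__ == "__main__":
    # Example: 450 req/s total, each VM serves 100 req/s, target mean response time 50 ms.
    print(vms_needed(450.0, 100.0, 0.05))  # -> 6
```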

A Change Detection Technique Supporting Nested Blank Nodes of RDF Documents (내포된 공노드를 포함하는 RDF 문서의 변경 탐지 기법)

  • Lee, Dong-Hee;Im, Dong-Hyuk;Kim, Hyoung-Joo
    • Journal of KIISE:Databases
    • /
    • v.34 no.6
    • /
    • pp.518-527
    • /
    • 2007
  • Finding the differences between RDF documents is an important issue, because RDF documents change frequently. When RDF documents contain blank nodes, we need a matching technique for blank nodes in change detection. Blank nodes can be nested, and they are used in most RDF documents. An RDF document can be modeled as a graph that contains many subtrees, so change detection can be considered a minimum-cost tree matching problem. In this paper, we propose a change detection technique for RDF documents using a labeling scheme for blank nodes. We also propose a method for improving the efficiency of general triple matching, which uses predicate grouping and partitioning (a simplified grouped-diff sketch is given below). In experiments, we showed that our approach was more accurate and faster than previous approaches.
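As a rough illustration of predicate grouping in triple matching (not the paper's blank-node labeling scheme), the sketch below groups triples by predicate and diffs each group; the `ex:` prefixes, the sample triples, and the `diff` helper are invented for the example.

```python
# Hypothetical sketch: naive RDF triple diff with predicate grouping.
# It ignores blank-node matching, which is the hard part the paper addresses.
from collections import defaultdict

def group_by_predicate(triples):
    groups = defaultdict(set)
    for s, p, o in triples:
        groups[p].add((s, o))
    return groups

def diff(old_triples, new_triples):
    """Return (added, removed) triples, comparing only within each predicate group."""
    old_g, new_g = group_by_predicate(old_triples), group_by_predicate(new_triples)
    added, removed = [], []
    for p in set(old_g) | set(new_g):
        for s, o in new_g[p] - old_g[p]:
            added.append((s, p, o))
        for s, o in old_g[p] - new_g[p]:
            removed.append((s, p, o))
    return added, removed

old = [("ex:a", "ex:name", "Alice"), ("ex:a", "ex:age", "30")]
new = [("ex:a", "ex:name", "Alice"), ("ex:a", "ex:age", "31")]
print(diff(old, new))  # ([('ex:a', 'ex:age', '31')], [('ex:a', 'ex:age', '30')])
```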

A 200-MHz@2.5-V Dual-Mode Multiplier for Single/Double-Precision Multiplications (단정도/배정도 승산을 위한 200-MHZ@2.5-V 이중 모드 승산기)

  • 이종남;박종화;신경욱
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.4 no.5
    • /
    • pp.1143-1150
    • /
    • 2000
  • A dual-mode multiplier (DMM) that performs single- and double-precision multiplications has been designed using a 0.25-μm 5-metal CMOS technology. An algorithm for efficiently implementing double-precision multiplication with a single-precision multiplier was proposed, based on partitioning a double-precision multiplication into four single-precision sub-multiplications and computing them with sequential accumulation (a short sketch of this partitioning is given below). Compared with conventional double-precision multipliers, our approach reduces the hardware complexity by about one third, resulting in a smaller silicon area and lower power dissipation at the expense of increased latency and throughput cycles. The DMM consists of a 28-b × 28-b single-precision multiplier designed using radix-4 Booth recoding and redundant binary (RB) arithmetic, an accumulator, and simple control logic for mode selection. It contains about 25,000 transistors in an area of about 0.77 × 0.40 mm². HSPICE simulation results show that the DMM core can safely operate with a 200-MHz clock at 2.5 V, and its estimated power dissipation is about 130 mW in double-precision mode.
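The partitioning idea in the abstract is the familiar split of a double-width product into four single-width partial products: with A = A_H·2^n + A_L and B = B_H·2^n + B_L, A·B = A_H·B_H·2^{2n} + (A_H·B_L + A_L·B_H)·2^n + A_L·B_L. The sketch below only checks that identity in software; it does not model the Booth-recoded, redundant-binary hardware datapath, and the operand values are arbitrary.

```python
# Hypothetical sketch: emulate a double-width multiplication with four
# single-width sub-multiplications and sequential accumulation, mirroring the
# partitioning described in the abstract (n is the single-precision operand
# width; 28 bits in the paper's multiplier core).
def dual_mode_multiply(a, b, n=28):
    mask = (1 << n) - 1
    a_hi, a_lo = a >> n, a & mask
    b_hi, b_lo = b >> n, b & mask
    # Four single-precision partial products, accumulated with the proper shifts.
    acc = a_lo * b_lo
    acc += (a_lo * b_hi) << n
    acc += (a_hi * b_lo) << n
    acc += (a_hi * b_hi) << (2 * n)
    return acc

# Sanity check against Python's arbitrary-precision multiply.
x, y = 0x0FEDCBA987654, 0x0123456789ABC   # 52-bit mantissa-sized operands
assert dual_mode_multiply(x, y) == x * y
```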


Integrity Assessment Models for Bridge Structures Using Fuzzy Decision-Making (퍼지의사결정을 이용한 교량 구조물의 건전성평가 모델)

  • 안영기;김성칠
    • Journal of the Korea Concrete Institute
    • /
    • v.14 no.6
    • /
    • pp.1022-1031
    • /
    • 2002
  • This paper presents efficient models for bridge structures using CART-ANFIS (classification and regression tree-adaptive neuro-fuzzy inference system). A fuzzy decision tree partitions the input space of a data set into mutually exclusive regions; each region is assigned a label, a value, or an action to characterize its data points. Fuzzy decision trees used for classification problems are often called fuzzy classification trees, and each terminal node contains a label that indicates the predicted class of a given feature vector. In the same vein, fuzzy decision trees used for regression problems are often called fuzzy regression trees, and the terminal-node labels may be constants or equations that specify the predicted output value for a given input vector. Note that CART can select relevant inputs and partition the input space, while ANFIS refines the regression and makes it continuous and smooth everywhere (a minimal tree-partitioning sketch is given below). Thus CART and ANFIS are complementary, and their combination constitutes a solid approach to fuzzy modeling.
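As a loose illustration of the tree-partitioning step only (not the paper's CART-ANFIS implementation), the sketch below splits a one-dimensional input space at fixed thresholds and assigns each region the mean of its training targets; the `fit_regions`/`predict` helpers and the sample data are invented for the example.

```python
# Hypothetical sketch: a one-feature regression "tree" that partitions the input
# range into regions and assigns each region the mean of its training targets,
# loosely illustrating the partitioning CART performs before ANFIS smooths it.
def fit_regions(xs, ys, thresholds):
    """Thresholds split the x-axis into len(thresholds) + 1 regions."""
    edges = [float("-inf")] + sorted(thresholds) + [float("inf")]
    regions = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        targets = [y for x, y in zip(xs, ys) if lo <= x < hi]
        regions.append((lo, hi, sum(targets) / len(targets) if targets else 0.0))
    return regions

def predict(regions, x):
    for lo, hi, value in regions:
        if lo <= x < hi:
            return value

xs = [0.1, 0.4, 0.6, 0.9]
ys = [1.0, 2.0, 3.0, 5.0]
model = fit_regions(xs, ys, thresholds=[0.5])
print(predict(model, 0.3), predict(model, 0.8))  # 1.5 4.0
```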

Heuristics for Selecting Nodes on Cable TV Network (케이블 TV 망에서 노드 선택을 위한 휴리스틱 연구)

  • Chong, Kyun-Rak
    • Journal of the Korea Society of Computer and Information
    • /
    • v.13 no.4
    • /
    • pp.133-140
    • /
    • 2008
  • The cable TV network has delivered downward broadcasting signals from distribution centers to subscribers. Since the traditional coaxial cable was upgraded to Hybrid Fiber Coaxial (HFC) cable, the upward channels have enabled broadband services such as Internet access. These upward channels are vulnerable to ingress noise. When the noise from child nodes accumulated at an amplifier exceeds a certain level, that node has to be cut off to prevent noise propagation. The node selection problem (NSP) is defined as selecting nodes so that the noise at each node does not exceed a given threshold value and the sum of the profits of the selected nodes is maximized. The NSP has been shown to be NP-hard. In this paper, we propose heuristics to find near-optimal solutions for the NSP (a simple greedy baseline is sketched below). The experimental results show that interval partitioning is better than the greedy approach. Our heuristics can be used by an HFC network management system to provide privileged services to premium subscribers on HFC networks.
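For context only, here is a greedy baseline of the kind the paper compares against; it treats node selection as a flat knapsack-style choice by profit-to-noise ratio, ignores the tree structure of an HFC plant, and uses invented node data.

```python
# Hypothetical sketch (not the paper's heuristic): a greedy baseline for a
# knapsack-like node selection problem, picking nodes by profit-to-noise ratio
# until the accumulated noise would exceed the threshold.
def greedy_select(nodes, noise_threshold):
    """nodes: list of (name, profit, noise). Returns (selected names, total profit)."""
    selected, total_profit, total_noise = [], 0.0, 0.0
    for name, profit, noise in sorted(nodes, key=lambda n: n[1] / n[2], reverse=True):
        if total_noise + noise <= noise_threshold:
            selected.append(name)
            total_profit += profit
            total_noise += noise
    return selected, total_profit

nodes = [("A", 10, 4), ("B", 7, 3), ("C", 6, 5), ("D", 3, 1)]
print(greedy_select(nodes, noise_threshold=8))  # (['D', 'A', 'B'], 20.0)
```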


Combined Artificial Bee Colony for Data Clustering (융합 인공벌군집 데이터 클러스터링 방법)

  • Kang, Bum-Su;Kim, Sung-Soo
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.40 no.4
    • /
    • pp.203-210
    • /
    • 2017
  • Data clustering is one of the most difficult and challenging problems and can be formally considered a particular kind of NP-hard grouping problem. The K-means algorithm is one of the most popular and widely used clustering methods because it is easy to implement and very efficient. However, it is prone to getting trapped in local optima, and its solutions vary widely with different initializations on large data sets. Therefore, we need to study efficient computational intelligence methods that find the global optimal solution of the data clustering problem within limited computational time. The objective of this paper is to propose a combined artificial bee colony (CABC) algorithm, with K-means used for initialization and finalization, to find solutions that are effective for the data clustering optimization problem. The artificial bee colony (ABC) is an algorithm motivated by the intelligent behavior exhibited by honeybees when searching for food. The performance of ABC is better than or similar to that of other population-based algorithms, with the added advantage of employing fewer control parameters. Our proposed CABC method is able to provide a near-optimal solution within reasonable time by balancing converged and diversified searches. In this paper, experiments and analysis on clustering problems demonstrate that CABC is a competitive approach compared to previous partitioning approaches, with satisfactory results with respect to solution quality (the K-means component it builds on is sketched below). We validate the performance of CABC on the Iris, Wine, Glass, Vowel, and Cloud data sets from the UCI machine learning repository, comparing against previous studies by experiment and analysis. Our proposed KABCK (K-means+ABC+K-means) is better than ABCK (ABC+K-means), KABC (K-means+ABC), ABC, and K-means in our simulations.
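The K-means component that KABCK wraps around the ABC search minimizes the within-cluster sum of squared distances. The sketch below shows only that baseline component and its objective; the ABC phase is deliberately omitted, and the sample points are invented.

```python
# Minimal sketch of the K-means initialization/finalization step and the
# partitioning objective (SSE) it minimizes; not the paper's CABC algorithm.
import random

def kmeans(points, k, iters=100, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k),
                      key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))
            clusters[idx].append(p)
        # Recompute centroids as cluster means (keep old centroid if cluster is empty).
        centroids = [
            tuple(sum(c) / len(c) for c in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

def sse(centroids, clusters):
    """Within-cluster sum of squared distances (the partitioning objective)."""
    return sum(
        sum((a - b) ** 2 for a, b in zip(p, c))
        for c, cl in zip(centroids, clusters) for p in cl
    )

points = [(1.0, 1.0), (1.2, 0.8), (5.0, 5.1), (4.8, 5.2)]
cents, clus = kmeans(points, k=2)
print(sse(cents, clus))
```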

Bayesian analysis of finite mixture model with cluster-specific random effects (군집 특정 변량효과를 포함한 유한 혼합 모형의 베이지안 분석)

  • Lee, Hyejin;Kyung, Minjung
    • The Korean Journal of Applied Statistics
    • /
    • v.30 no.1
    • /
    • pp.57-68
    • /
    • 2017
  • Clustering algorithms attempt to find a partition of a finite set of objects into a potentially predetermined number of nonempty subsets. Gibbs sampling of a normal mixture of linear mixed regressions with a Dirichlet prior distribution calculates posterior probabilities when the number of clusters is known. Our approach provides simultaneous partitioning and parameter estimation with the computation of classification probabilities (a minimal classification-probability sketch is given below). A Monte Carlo study of curve estimation results showed that the model is useful for function estimation. Examples are given to show how these models perform on real data.
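As a minimal illustration of the classification probabilities mentioned above (not the paper's full model with cluster-specific random effects and a Dirichlet prior), the sketch below computes P(cluster = j | x) for a two-component normal mixture with assumed, fixed parameters.

```python
# Hypothetical sketch: classification probabilities in a two-component normal
# mixture with known parameters -- the kind of quantity a Gibbs sampler updates
# at each iteration. All parameter values below are assumptions.
import math

def normal_pdf(x, mean, sd):
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def classification_probs(x, weights, means, sds):
    """P(cluster = j | x) for each mixture component j."""
    likes = [w * normal_pdf(x, m, s) for w, m, s in zip(weights, means, sds)]
    total = sum(likes)
    return [l / total for l in likes]

# Two clusters with mixing weights 0.6 / 0.4.
print(classification_probs(1.8, weights=[0.6, 0.4], means=[0.0, 3.0], sds=[1.0, 1.0]))
```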

Precise-Optimal Frame Length Based Collision Reduction Schemes for Frame Slotted Aloha RFID Systems

  • Dhakal, Sunil;Shin, Seokjoo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.8 no.1
    • /
    • pp.165-182
    • /
    • 2014
  • RFID systems employ efficient anti-collision algorithms (ACAs) to enhance performance in various applications. The EPC-Global G2 RFID system utilizes Frame Slotted Aloha (FSA) as its ACA. One of the common approaches used to maximize the system performance (tag identification efficiency) of FSA-based RFID systems involves finding the optimal value of the frame length relative to the contending population size of the RFID tags. Several analytical models for finding the optimal frame length have been developed; however, they are not perfectly optimized because they lack a precise characterization of the timing details of the underlying ACA. In this paper, we investigate this promising direction by precisely characterizing the timing details of the EPC-Global G2 protocol and using them to derive a precise-optimal frame length model. The main objective of the model is to determine the optimal frame length value for the estimated number of tags that maximizes the performance of an RFID system (the basic frame-length trade-off is sketched below). However, because precise estimation of the contending tags is difficult, we utilize a parametric-heuristic approach to maximize the system performance and propose two simple schemes based on the obtained optimal frame length, namely Improved Dynamic-Frame Slotted Aloha (ID-FSA) and Exponential Random Partitioning-Frame Slotted Aloha (ERP-FSA). The ID-FSA scheme is based on tag set estimation and frame size update mechanisms, whereas the ERP-FSA scheme adjusts the contending tag population in such a way that the applied frame size becomes optimal. Simulation results indicate that the ID-FSA scheme performs better than several well-known schemes in various conditions, while the ERP-FSA scheme performs well when the frame size is small.
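The underlying trade-off is the textbook FSA slot efficiency (n/L)(1 − 1/L)^{n−1}, which peaks when the frame length L is close to the contending tag count n. The sketch below evaluates that formula over power-of-two candidate frame lengths; it does not include the G2 timing details that the paper's precise-optimal model adds, and the tag count is an assumed example.

```python
# Hypothetical sketch: the textbook FSA efficiency curve used when picking a
# frame length for an estimated tag population (the paper refines this with
# precise G2 timing, which is not modelled here).
def fsa_efficiency(num_tags, frame_length):
    """Expected fraction of slots holding exactly one tag reply."""
    n, L = num_tags, frame_length
    return (n / L) * (1 - 1 / L) ** (n - 1)

def best_frame_length(num_tags, candidates=(16, 32, 64, 128, 256)):
    """Pick the candidate frame length with the highest slot efficiency."""
    return max(candidates, key=lambda L: fsa_efficiency(num_tags, L))

# With ~120 contending tags, a 128-slot frame is the best power-of-two choice.
print(best_frame_length(120))  # -> 128
```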

Cloudification of On-Chip Flash Memory for Reconfigurable IoTs using Connected-Instruction Execution (연결기반 명령어 실행을 이용한 재구성 가능한 IoT를 위한 온칩 플래쉬 메모리의 클라우드화)

  • Lee, Dongkyu;Cho, Jeonghun;Park, Daejin
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.14 no.2
    • /
    • pp.103-111
    • /
    • 2019
  • IoT-driven large-scale systems consist of connected things with on-chip executable embedded software. These lightweight embedded things have limited hardware space, especially a small on-chip flash memory. In addition, on-chip embedded software in flash memory is not easy to update at runtime to keep up with the latest services in IoT-driven applications. It is therefore becoming important to develop lightweight IoT devices that host various software in the limited on-chip flash memory. Remote instruction execution in the cloud via IoT connectivity enables high-performance software execution with unlimited software instructions in the cloud and low-power streaming of instruction execution on IoT edge devices. In this paper, we propose a cloud-IoT asymmetric structure that provides high-performance instruction execution in the cloud while keeping code executable at low power in the lightweight IoT edge environment using remote instruction execution. We propose a simulation-based approach to determine an efficient partitioning of the software runtime between the cloud and the IoT edge (a toy partitioning cost model is sketched below). We evaluated instruction cloudification using remote instructions by determining the execution time under the proposed structure. A cloud-connected instruction set simulator is newly introduced to emulate the behavior of the processor. Experimental results of cloud-IoT connected software execution using remote instructions showed the feasibility of cloudification of on-chip code flash memory. The simulation environment for cloud-connected code execution successfully emulates the architectural operations of on-chip flash memory in the cloud, so that various software services in IoT can be accelerated and performed at low power by cloudification of remote instruction execution. The execution time of the program is reduced by 50% and the memory space is reduced by 24% when cloud-connected code execution is used.
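As a toy illustration of the cloud/edge partitioning decision (not the paper's simulator), the sketch below scores single-cut partitions of a sequence of code blocks using assumed per-instruction times and a fixed per-offload link latency; all numbers and helper names are invented.

```python
# Hypothetical sketch: a toy cost model for deciding which code blocks stay in
# on-chip flash and which execute remotely in the cloud. The single-cut rule
# and all timing constants are assumptions for illustration only.
def total_time(blocks, cut, edge_cycle, cloud_cycle, link_latency):
    """Blocks [0, cut) run on the edge; blocks [cut, len) run in the cloud."""
    edge = sum(b * edge_cycle for b in blocks[:cut])
    cloud = sum(b * cloud_cycle for b in blocks[cut:])
    remote_calls = len(blocks) - cut
    return edge + cloud + remote_calls * link_latency

def best_cut(blocks, edge_cycle=10e-9, cloud_cycle=1e-9, link_latency=2e-3):
    """Return the cut index that minimizes the modelled total execution time."""
    return min(range(len(blocks) + 1),
               key=lambda c: total_time(blocks, c, edge_cycle, cloud_cycle, link_latency))

# Instruction counts per block: small control blocks plus one heavy compute block.
blocks = [20_000, 15_000, 5_000_000, 30_000]
print(best_cut(blocks))  # -> 2: keep the two small blocks on-chip, offload the heavy tail
```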