• Title/Summary/Keyword: Computing amount

Parallel Computing Strategies for High-Speed Impact into Ceramic/Metal Plates (세라믹/금속판재의 고속충돌 파괴 유한요소 병렬 해석기법)

  • Moon, Ji-Joong;Kim, Seung-Jo;Lee, Min-Hyung
    • Journal of the Computational Structural Engineering Institute of Korea / v.22 no.6 / pp.527-532 / 2009
  • In this paper, simulations of high-speed impact into ceramic and/or metal materials are discussed. To model the discrete nature of fracture and damage in brittle materials, we implemented a cohesive-law fracture model with a node-separation algorithm for tensile failure, and a Mohr-Coulomb model for compressive loading. The drawback of this scheme is its heavy computational cost, because new nodes are generated continuously whenever a new crack surface is created. To reduce the amount of computation, parallelization with the MPI library has been implemented. In high-speed impact problems, the mesh configuration and the contact calculation change continuously as the time step advances, which unbalances the computational load across processors. A dynamic load balancing technique, which re-allocates work at run time, is used to achieve good parallel performance (a load-balancing sketch is given below). Several impact problems have been simulated, and the parallel performance and accuracy of the solutions are discussed.
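
A minimal dynamic load balancing sketch in Python with mpi4py (an illustration only; the paper's solver and its actual re-allocation scheme are not reproduced here). Rank 0 hands out element blocks on demand, so processors that finish their contact and crack calculations early immediately receive more work:

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

TAG_WORK, TAG_DONE = 1, 2
# hypothetical element blocks; in the real solver these would be mesh partitions
blocks = [list(range(i, i + 10)) for i in range(0, 200, 10)]

if rank == 0:
    next_block = 0
    active = size - 1
    while active > 0:
        status = MPI.Status()
        comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG, status=status)
        worker = status.Get_source()
        if next_block < len(blocks):
            comm.send(blocks[next_block], dest=worker, tag=TAG_WORK)
            next_block += 1
        else:
            comm.send(None, dest=worker, tag=TAG_DONE)
            active -= 1
else:
    while True:
        comm.send(None, dest=0)                     # ask the master for work
        block = comm.recv(source=0, tag=MPI.ANY_TAG)
        if block is None:
            break
        # ... run the impact/contact computation for this element block ...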

Application of Recent Approximate Dynamic Programming Methods for Navigation Problems (주행문제를 위한 최신 근사적 동적계획법의 적용)

  • Min, Dae-Hong;Jung, Keun-Woo;Kwon, Ki-Young;Park, Joo-Young
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.6 / pp.737-742 / 2011
  • Navigation problems include the task of determining control inputs, under various constraints, for systems such as mobile robots subject to uncertain disturbances. Such tasks can be modeled as constrained stochastic control problems. To solve them, one may try to use dynamic programming (DP) methods, which rely on the concept of an optimal value function. In most real-world problems, however, this approach faces serious difficulties: the exact system model may not be known, computing the optimal control policy may be infeasible, and a huge amount of computing resources may be required. As a strategy for overcoming these difficulties, one can use approximate dynamic programming (ADP) methods, which find suboptimal control policies by resorting to approximate value functions (a generic sketch is given below). In this paper, we apply recently proposed ADP methods to a class of navigation problems with complex constraints and examine the resulting performance characteristics.
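
A minimal fitted value iteration sketch in Python (generic ADP on a toy one-dimensional navigation task, not the specific methods evaluated in the paper; the feature set, dynamics, and reward below are illustrative assumptions). A linear value-function approximation is repeatedly fit to one-step Bellman backups sampled from the stochastic system:

import numpy as np

rng = np.random.default_rng(0)
goal, gamma = 10.0, 0.95
actions = np.array([-1.0, 0.0, 1.0])

def features(x):
    return np.array([1.0, x, x * x])               # simple polynomial features

def step(x, a):
    x_next = x + a + rng.normal(scale=0.2)          # uncertain disturbance
    reward = -abs(goal - x_next)
    return x_next, reward

w = np.zeros(3)                                     # value-function weights
states = rng.uniform(0.0, 10.0, size=200)           # sampled training states

for _ in range(50):                                 # fitted value iteration sweeps
    targets = []
    for x in states:
        backups = []
        for a in actions:
            x_next, r = step(x, a)
            backups.append(r + gamma * features(x_next) @ w)
        targets.append(max(backups))                # greedy one-step Bellman backup
    X = np.array([features(x) for x in states])
    w, *_ = np.linalg.lstsq(X, np.array(targets), rcond=None)

print("approximate value near the goal:", features(goal) @ w)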

Systematic Network Coding for Computational Efficiency and Energy Efficiency in Wireless Body Area Networks (무선 인체 네트워크에서의 계산 효율과 에너지 효율 향상을 위한 시스테매틱 네트워크 코딩)

  • Kim, Dae-Hyeok;Suh, Young-Joo
    • The Journal of Korean Institute of Communications and Information Sciences / v.36 no.10A / pp.823-829 / 2011
  • Recently, wireless body area networks (WBANs) have received much attention as an enabler of ubiquitous healthcare systems. In a WBAN, the sensor nodes and the personal base station, such as a PDA, are energy-constrained, and the computation overhead should be minimized because of the nodes' limited computing power and memory. Reliable data transmission must also be guaranteed because vital signs are being handled. In this paper, we propose a systematic network coding scheme for WBANs that reduces both the network coding overhead and the total energy consumed to complete a transmission. We model the proposed scheme using a Markov chain, formulate the total energy consumption for completing the data transmission as a minimization problem, and find an optimal solution. Our simulation results show that the proposed systematic network coding achieves a large reduction in energy, and that it reduces the computational overhead imposed on each node by simplifying the decoding process (a small encoding sketch is given below).
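
A minimal systematic network coding sketch in Python over GF(2) (the field and packet contents are illustrative assumptions; the paper does not publish its encoder). The first K packets are sent uncoded, forming the systematic part, and a few random XOR combinations follow so that the receiver can repair occasional losses without decoding everything:

import random

def encode(packets, n_coded, seed=1):
    """Return systematic packets followed by random GF(2) combinations."""
    rnd = random.Random(seed)
    out = [(tuple(1 if j == i else 0 for j in range(len(packets))), p)
           for i, p in enumerate(packets)]          # systematic part: identity coefficients
    for _ in range(n_coded):
        coeffs = tuple(rnd.randint(0, 1) for _ in packets)
        mixed = bytes(len(packets[0]))
        for c, p in zip(coeffs, packets):
            if c:
                mixed = bytes(a ^ b for a, b in zip(mixed, p))
        out.append((coeffs, mixed))                 # coded repair packet
    return out

source = [b"ecg-0001", b"spo2-098", b"temp-365"]    # hypothetical vital-sign packets
for coeffs, payload in encode(source, n_coded=2):
    print(coeffs, payload)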

An Iterative Algorithm for the Bottom Up Computation of the Data Cube using MapReduce (맵리듀스를 이용한 데이터 큐브의 상향식 계산을 위한 반복적 알고리즘)

  • Lee, Suan;Jo, Sunhwa;Kim, Jinho
    • Journal of Information Technology and Architecture / v.9 no.4 / pp.455-464 / 2012
  • Due to the recent data explosion, methods that can meet the requirements of large-scale data analysis have been studied. This paper proposes the MRIterativeBUC algorithm, which enables efficient computation of a large data cube through distributed parallel processing on the MapReduce framework. MRIterativeBUC runs the BUC method as an iterative MapReduce operation and thereby overcomes the limits on storage size and processing capacity imposed by large data cube computation. It adopts the iceberg-cube idea of computing only the aspects of interest to analysts, and distributes the cube computation by partitioning and sorting (an illustrative map/reduce pair is sketched below). It thus reduces the data emitted between stages, which lowers the network load, the processing amount on each node, and ultimately the overall cube computation cost. The bottom-up cube computation and the iterative MapReduce algorithm proposed in this paper can be extended in various ways and applied to many applications.
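
A minimal sketch of one BUC-style round in Python with the MapReduce phases simulated in-process (the actual MRIterativeBUC job layout, key design, and dimension ordering are assumptions here). The mapper projects each record onto the dimensions of the current cuboid, and the reducer aggregates and applies an iceberg threshold so that small groups are pruned, as BUC does:

from itertools import groupby

MIN_SUPPORT = 2                                     # iceberg condition

def map_phase(record, dims):
    key = tuple(record[d] for d in dims)            # project onto the current cuboid
    return key, 1

def reduce_phase(key, counts):
    total = sum(counts)
    return (key, total) if total >= MIN_SUPPORT else None

records = [{"city": "Seoul", "item": "A"}, {"city": "Seoul", "item": "A"},
           {"city": "Busan", "item": "B"}]

pairs = sorted(map_phase(r, ("city",)) for r in records)   # stand-in for shuffle/sort
for key, group in groupby(pairs, key=lambda kv: kv[0]):
    result = reduce_phase(key, [v for _, v in group])
    if result:
        print(result)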

A Recovery Scheme of Single Node Failure using Version Caching in Database Sharing Systems (데이타베이스 공유 시스템에서 버전 캐싱을 이용한 단일 노드 고장 회복 기법)

  • 조행래;정용석;이상호
    • Journal of KIISE:Databases / v.31 no.4 / pp.409-421 / 2004
  • A database sharing system (DSS) couples a number of computing nodes for high-performance transaction processing, and each node in the DSS shares the database at the disk level. In case of node failures, database recovery algorithms are required to restore the database to a consistent state. A recovery process in a DSS takes considerably longer than in a single-node database system, since it must merge the separate log records of several nodes and perform REDO tasks using the merged log records. In this paper, we propose a two-version caching (2VC) algorithm that improves on the cache fusion algorithm introduced in Oracle 9i Real Application Cluster (ORAC). The 2VC algorithm achieves faster database recovery by eliminating the need for merged log records in the case of a single node failure (the idea is sketched below). Furthermore, it improves the performance of normal transaction processing by reducing the unnecessary disk force overhead that occurs in ORAC.
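
A highly simplified sketch of the two-version caching idea in Python (the actual 2VC protocol, its interaction with cache fusion, and its locking are not reproduced; the class below is an illustrative assumption). Each cached page keeps its latest committed version alongside the working version, so that when the node holding an uncommitted modification fails, a consistent copy is still available without merging per-node logs:

class TwoVersionCache:
    def __init__(self):
        self.pages = {}                     # page_id -> (working, committed)

    def write(self, page_id, data):
        _, committed = self.pages.get(page_id, (None, None))
        self.pages[page_id] = (data, committed)

    def commit(self, page_id):
        working, _ = self.pages[page_id]
        self.pages[page_id] = (working, working)

    def recover(self, page_id):
        """Used when the modifying node fails before forcing the page to disk."""
        _, committed = self.pages.get(page_id, (None, None))
        return committed

cache = TwoVersionCache()
cache.write("P1", "balance=100")
cache.commit("P1")
cache.write("P1", "balance=50")             # uncommitted change by a node that then fails
print(cache.recover("P1"))                  # prints the committed version: balance=100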

Parameter Derivation for Reducing ISI in 2-Dimensional Faster-than-Nyquist Transmission (나이퀴스트율보다 빠른 전송 시스템에서 ISI 감소를 위한 변수 도출 방법)

  • Kang, Donghoon;Kim, Haeun;Park, Kyeongwon;Oh, Wangrok
    • The Journal of Korean Institute of Communications and Information Sciences / v.41 no.10 / pp.1147-1154 / 2016
  • The faster-than-Nyquist (FTN) transmission scheme has been attracting great attention as a spectrally efficient transmission scheme. In FTN transmission, modulated symbols are transmitted at a rate higher than the Nyquist rate, so a performance loss due to inter-symbol interference (ISI) is unavoidable. To minimize this loss, the FTN parameters should be carefully optimized. Unfortunately, simulation-based parameter optimization requires a significant amount of time and computing power, especially for two-dimensional FTN systems. In this paper, we propose a two-dimensional FTN transmission scheme whose parameters are derived from a numerical analysis of the ISI together with simulation results (a one-dimensional ISI computation is sketched below). Compared with a conventional Nyquist system, the proposed two-dimensional FTN scheme offers virtually identical bit error performance while providing higher spectral efficiency.
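
A minimal sketch, in Python, of how ISI taps can be inspected for a one-dimensional FTN link (the paper's two-dimensional analysis and its actual pulse parameters are not reproduced; the roll-off and packing factor below are assumptions). The matched-filter output of a root-raised-cosine pair, i.e. a raised-cosine pulse with T = 1, is sampled every tau*T with tau < 1; the nonzero off-center samples are the ISI introduced by FTN signaling:

import numpy as np

def raised_cosine(t, beta=0.3):
    """Raised-cosine pulse (matched-filter output of an RRC pair), T = 1."""
    t = np.asarray(t, dtype=float)
    out = np.sinc(t) * np.cos(np.pi * beta * t) / (1.0 - (2.0 * beta * t) ** 2)
    # handle the removable singularity at |t| = 1/(2*beta)
    sing = np.isclose(np.abs(t), 1.0 / (2.0 * beta))
    out[sing] = (np.pi / 4.0) * np.sinc(1.0 / (2.0 * beta))
    return out

tau = 0.8                                   # FTN packing factor (tau < 1 introduces ISI)
lags = np.arange(-4, 5)                     # neighbouring symbol positions
isi_taps = raised_cosine(lags * tau)
print("center tap:", isi_taps[lags == 0])
print("ISI taps  :", isi_taps[lags != 0])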

An Effective Control of Network Traffic using RTCP for Transmitting Video Streaming Data (비디오 스트리밍 데이타 전송시 RTCP를 이용한 효율적인 네트워크 트래픽 제어)

  • Park, Dae-Hoon;Hur, Hye-Sun;Hong, Youn-Sik
    • Journal of KIISE:Computing Practices and Letters / v.8 no.3 / pp.328-335 / 2002
  • Transferring video streaming data over a computer network requires a larger bandwidth than a typical application, which inevitably causes serious network overload given the limited bandwidth. In this paper, to address this problem, we adopt a method of transmitting video streaming data using RTP and RTCP. The RR (Receiver Report) packets of RTCP are examined to check whether network congestion has occurred. When it has, we reduce the overall network traffic by dynamically changing the quantization factor of Motion JPEG, one of the encoding formats in JMF (the adaptation rule is sketched below). When the ratio of a session's average transmission to the average of the overall transmission exceeds 7%, the amount of data transmitted for that session is adjusted toward the session mean. The experimental results show that the proposed method effectively reduces the overload and therefore improves the efficiency of video streaming transmission.
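
A minimal sketch of the RTCP-driven adaptation idea in Python (the 7% threshold comes from the abstract, but the step sizes, quality bounds, and rate values below are illustrative assumptions). Per-session statistics derived from RR packets drive the Motion JPEG quality factor down when a session exceeds the overall mean and back up when it has headroom:

def adapt_quality(session_rate, overall_mean_rate, quality,
                  threshold=0.07, step=0.05, q_min=0.3, q_max=0.95):
    """Return a new JPEG quality factor based on receiver-report statistics."""
    deviation = (session_rate - overall_mean_rate) / overall_mean_rate
    if deviation > threshold:
        quality = max(q_min, quality - step)   # session too heavy: coarser quantization
    elif deviation < -threshold:
        quality = min(q_max, quality + step)   # headroom available: finer quantization
    return quality

q = 0.8
for rate in (1200.0, 1500.0, 900.0):           # hypothetical per-RR session rates (kbps)
    q = adapt_quality(rate, overall_mean_rate=1000.0, quality=q)
    print(round(q, 2))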

Event and Service registry system for USN Application Services (USN응용 서비스를 위한 이벤트 및 서비스 레지스트리 시스템)

  • Yeom, Sung-Kun;Kim, Yong-Woon;Yoo, Sang-Keun;Kim, Hyung-Jun;Jung, Hoe-Kyung
    • Journal of the Korea Institute of Information and Communication Engineering / v.13 no.7 / pp.1459-1466 / 2009
  • In a ubiquitous environment, a large and varied amount of sensor data about things and users is generated continuously and periodically by RFID and sensor networks deployed in various spaces. Users must therefore be able to find the service they want among the many providers of sensor services, and the right service must be delivered to them when it is used. In the current USN (Ubiquitous Sensor Network) application service environment, however, there is no registry for finding such services, only the business-oriented UDDI (Universal Description, Discovery, and Integration). In addition, current services are provided under only a single condition or event, so a rule-based event processing system that can provide services according to various conditions and events is needed. In this paper, we study a service registry for sensor service search, modeled on the existing UDDI, and an event processing system for USN application services based on the web-service event rule language WS-ECA (Web Services-Event Condition Action); a minimal rule sketch is given below.
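
A minimal event-condition-action (ECA) rule sketch in plain Python (WS-ECA rules are XML documents tied to web-service notifications; the data class and the temperature rule below are illustrative assumptions, not the paper's rule format). A rule fires its action only when an incoming sensor event matches its event name and its condition predicate holds:

from dataclasses import dataclass
from typing import Callable

@dataclass
class EcaRule:
    event: str                                   # event name to match
    condition: Callable[[dict], bool]            # predicate over the event payload
    action: Callable[[dict], None]               # stand-in for a service invocation

rules = [
    EcaRule(event="temperature",
            condition=lambda e: e["value"] > 38.0,
            action=lambda e: print("notify caregiver:", e)),
]

def dispatch(event_name, payload):
    for rule in rules:
        if rule.event == event_name and rule.condition(payload):
            rule.action(payload)

dispatch("temperature", {"sensor": "body-01", "value": 38.5})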

Compression Methods for Time Series Data using Discrete Cosine Transform with Varying Sample Size (가변 샘플 크기의 이산 코사인 변환을 활용한 시계열 데이터 압축 기법)

  • Moon, Byeongsun;Choi, Myungwhan
    • KIISE Transactions on Computing Practices / v.22 no.5 / pp.201-208 / 2016
  • Collecting and storing multiple time series in real time requires a large memory space. To address this problem, we propose using a varying sample size in a compression scheme based on the discrete cosine transform (DCT). Time series data have the characteristic that a higher compression ratio can be achieved when the values change by small amounts and change infrequently. The coefficient of variation and the variability of the differences between adjacent data elements (VDAD) are presumed to be very good measures of these characteristics, and they are used as the key parameters that determine the varying sample size (a sketch is given below). Test results showed that both the VDAD-based and the coefficient-of-variation-based schemes produce excellent compression ratios; however, the former uses a much simpler sample-size decision mechanism and achieves better compression performance than the latter.
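
A minimal sketch of DCT block compression with a data-driven block size in Python (the paper's exact VDAD definition, its mapping from VDAD to sample size, and its coefficient-keeping rule are not published, so the formulas and thresholds below are illustrative assumptions):

import numpy as np
from scipy.fft import dct, idct

def vdad(x):
    """One plausible variability measure of the differences between adjacent elements."""
    d = np.diff(x)
    return np.std(d) / (np.abs(np.mean(d)) + 1e-12)

def choose_block_size(x):
    v = vdad(x)
    return 64 if v < 1.0 else 32 if v < 5.0 else 16   # smoother data -> larger blocks

def compress_block(block, keep_ratio=0.25):
    coeffs = dct(block, norm="ortho")
    k = max(1, int(len(block) * keep_ratio))
    idx = np.argsort(np.abs(coeffs))[::-1][:k]        # keep only the largest coefficients
    kept = np.zeros_like(coeffs)
    kept[idx] = coeffs[idx]
    return kept

series = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.01 * np.random.randn(256)
n = choose_block_size(series)
block = series[:n]
restored = idct(compress_block(block), norm="ortho")
print("block size:", n, "max error:", np.max(np.abs(restored - block)))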

Associated Keyword Recommendation System for Keyword-based Blog Marketing (키워드 기반 블로그 마케팅을 위한 연관 키워드 추천 시스템)

  • Choi, Sung-Ja;Son, Min-Young;Kim, Young-Hak
    • KIISE Transactions on Computing Practices / v.22 no.5 / pp.246-251 / 2016
  • Recently, the influence of SNS and online media has been growing rapidly, and with it the interest in marketing that uses these tools. Blog marketing can increase the ripple effect and information delivery of a campaign at low cost by securing top positions in the keyword search results of influential portal sites. However, because competition for the top ranks of specific keywords is intense, long-term and proactive efforts are needed. We therefore propose a new method that recommends groups of associated keywords with a higher chance of exposing the blog. The proposed method first collects the blog documents appearing in the search results of a target keyword, then extracts and filters keywords with high association by considering the frequency and location of each word. Next, each associated keyword is compared with the target keyword, and a group of associated keywords with a higher chance of exposure is recommended based on information such as their association, the monthly search volume of each associated keyword, the number of blogs appearing in its search results, and the average writing date of those blogs (a scoring sketch is given below). Experimental results show that the proposed method recommends keyword groups with high association.
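
A minimal keyword-scoring sketch in Python (the paper does not publish its weighting or exposure model, so the score combination, weights, and sample numbers below are illustrative assumptions). Each candidate keyword is scored from its association with the target keyword, its monthly search volume, and the competition measured by the number of blogs already ranked for it:

from dataclasses import dataclass

@dataclass
class Candidate:
    keyword: str
    association: float      # similarity to the target keyword, 0..1
    monthly_searches: int   # search volume per month
    competing_blogs: int    # blogs already ranked for this keyword

def exposure_score(c, w_assoc=0.5, w_demand=0.3, w_comp=0.2):
    demand = c.monthly_searches / (c.monthly_searches + 1000)     # saturating demand term
    competition = 1.0 / (1.0 + c.competing_blogs / 100)           # fewer rivals -> higher score
    return w_assoc * c.association + w_demand * demand + w_comp * competition

candidates = [
    Candidate("camping chair", 0.82, 5400, 320),
    Candidate("camping gear rental", 0.61, 900, 45),
]
for c in sorted(candidates, key=exposure_score, reverse=True):
    print(c.keyword, round(exposure_score(c), 3))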