• Title/Summary/Keyword: Computation Efficiency

A Review on Improvements of Climate Change Vulnerability Analysis Methods : Focusing on Sea Level Rise Disasters (도시 기후변화 재해취약성분석 방법의 개선방안 검토 : 해수면상승 재해를 중심으로)

  • Kim, Ji-Sook;Kim, Ho-Yong;Lee, Sung-Ho
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.17 no.1
    • /
    • pp.50-60
    • /
    • 2014
  • The purpose of this study is to identify the characteristics of climate change vulnerability analysis methods and ways to improve them, in order to build cities that are safe from disasters. To this end, an empirical analysis of sea level rise disasters was performed, focusing on Haeundae-gu in Busan. Census output areas and dongs were set as the analysis units, the disaster vulnerability of each was analyzed, and improvements were reviewed by comparing the analysis processes and results. The results show that the Modifiable Areal Unit Problem (MAUP), in which results differ according to the aggregation unit, occurs. Improvements were derived for each stage of the analysis process: in the spatial unit setting stage, which forms the basis of the analysis, adjustment of the analysis unit, adjustment of the score computation method, and a clearer analysis method for each disaster type are needed; in the analysis execution stage, weighting of variables, diversification of variables, and exclusion of subjective selection of analysis methods are needed. An accurate total disaster vulnerability analysis is expected to serve as the basis for improving the efficiency of urban resilience in responding to future climate change. (A toy illustration of the MAUP follows below.)
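
The aggregation effect described above (MAUP) can be illustrated with a toy example: the same hypothetical point-level vulnerability scores, aggregated once by census output area and once by dong, produce different area rankings. All values and unit labels below are invented for illustration and are not taken from the study.

    # Toy illustration of the Modifiable Areal Unit Problem (MAUP): the same
    # hypothetical point-level vulnerability scores, aggregated to two different
    # spatial units, give different area rankings.  All values and unit labels
    # are invented for illustration and are not taken from the study.
    import pandas as pd

    points = pd.DataFrame({
        "score":       [0.9, 0.2, 0.8, 0.1, 0.7, 0.3],   # point vulnerability scores
        "output_area": ["A1", "A1", "A2", "A2", "A3", "A3"],
        "dong":        ["D1", "D1", "D1", "D2", "D2", "D2"],
    })

    by_output_area = points.groupby("output_area")["score"].mean()
    by_dong = points.groupby("dong")["score"].mean()

    print(by_output_area.sort_values(ascending=False))   # ranking by fine unit
    print(by_dong.sort_values(ascending=False))          # ranking by coarse unit differs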

A Study on Shape Optimum Design for Stability of Elastic Structures (탄성 구조물의 안정성을 고려한 형상최적설계)

  • Yang, Wook-Jin;Choi, Joo-Ho
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.20 no.1
    • /
    • pp.75-82
    • /
    • 2007
  • This paper addresses a method for the shape optimization of a continuous elastic body considering stability, i.e., buckling behavior. The sensitivity formula for the critical load is derived analytically and expressed in terms of the shape variation, based on the continuum formulation of the stability problem. Unlike the conventional finite difference method (FDM), this method is efficient in that only a couple of analyses are required regardless of the number of design parameters. Commercial software such as ANSYS can be employed, since the method requires only the analysis results to compute the sensitivity. Although buckling problems are solved more efficiently with structural elements such as beams and shells, elastic solids were chosen for the buckling analysis because solid elements can be used for any kind of structure, whether thick or thin. The sensitivity is then computed using the mathematical package MATLAB together with the initial stress and buckling analyses from ANSYS. Several problems are chosen to illustrate the efficiency of the presented method; they are applied to shape optimization problems that minimize weight under an allowed critical load and that maximize the critical load under the same volume. (A toy finite-difference baseline is sketched below.)
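
For context, the finite-difference baseline that the paper's analytic sensitivity avoids can be sketched with a toy linear buckling problem. The 2x2 stiffness and geometric stiffness matrices and the shape parameter t below are hypothetical stand-ins for a finite element model; the point is only that central differencing needs two extra eigenvalue analyses per design parameter, which is the cost the analytic formula removes.

    # Toy sketch of the finite-difference (FDM) baseline the analytic sensitivity
    # replaces.  Linear buckling: (K + lam*Kg) v = 0, i.e. K v = lam * (-Kg) v,
    # and the critical load factor lam_cr is the smallest generalized eigenvalue.
    # K(t) and Kg below are small hypothetical matrices depending on one shape
    # parameter t, not a real finite element model.
    import numpy as np
    from scipy.linalg import eigh

    def critical_load(t):
        K = np.array([[2.0 + t, -1.0],
                      [-1.0,     2.0]])          # elastic stiffness (toy)
        Kg = np.array([[-1.0,  0.0],
                       [ 0.0, -1.0]])            # geometric stiffness per unit load (toy)
        lam = eigh(K, -Kg, eigvals_only=True)    # solves K v = lam * (-Kg) v
        return lam.min()                         # critical load factor

    t, h = 0.5, 1.0e-6
    d_lam = (critical_load(t + h) - critical_load(t - h)) / (2.0 * h)
    print("lam_cr =", critical_load(t), "  d(lam_cr)/dt by central FD =", d_lam)
    # Each such derivative costs two extra eigen-analyses per design parameter,
    # which is exactly the cost the paper's analytic sensitivity formula avoids.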

Computation of Optimal Path for Pedestrian Reflected on Mode Choice of Public Transportation in Transfer Station (대중교통 수단선택과 연계한 복합환승센터 내 보행자 최적경로 산정)

  • Yoon, Sang-Won;Bae, Sang-Hoon
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.6 no.2
    • /
    • pp.45-56
    • /
    • 2007
  • As the function and scale of transit centers grow, an efficient guidance system within the center is essential for transit users to find efficient routes. Although there are several studies on optimal paths over the road network, few studies address optimal paths inside a building. This study therefore develops an optimal path algorithm for car owners traveling from the basement parking lot to their destination within a transfer station. Based on Dijkstra's algorithm, which calculates horizontal distance, several factors such as fatigue, freshness, preference, and the time required to use moving devices are combined objectively through rank-sum and arithmetic-sum methods. In addition, the optimal public transportation mode is suggested to transferring passengers by a Neuro-Fuzzy model that reflects people's tendencies in public transportation mode choice. Finally, several scenarios demonstrate the efficiency of the proposed pedestrian optimal path algorithm: the model developed in this study is 75% more effective in the scenario reflecting different vertical distances and 24.5~107.7% more effective in the scenario considering different horizontal distances. (A minimal weighted-cost Dijkstra sketch follows below.)
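
A minimal sketch of the weighted-cost shortest-path idea is given below: Dijkstra's algorithm over a tiny hypothetical station graph whose edge costs combine walking distance with a fatigue penalty for the moving device used. The graph, factor values, and weights are illustrative and do not reproduce the paper's rank-sum/arithmetic-sum figures or its Neuro-Fuzzy mode choice model.

    # Minimal sketch of Dijkstra's algorithm over a tiny hypothetical station
    # graph whose edge costs combine walking distance with a fatigue penalty for
    # the moving device used (stairs vs. elevator).  Node names, factor values,
    # and weights are illustrative, not the paper's rank-sum figures.
    import heapq

    # edge: (neighbour, distance_m, fatigue_score)
    graph = {
        "parking": [("concourse_stairs", 40, 0.8), ("concourse_elevator", 55, 0.1)],
        "concourse_stairs": [("platform", 60, 0.3)],
        "concourse_elevator": [("platform", 60, 0.3)],
        "platform": [],
    }
    W_DIST, W_FATIGUE = 1.0, 50.0    # hypothetical weighting of the two factors

    def dijkstra(src, dst):
        best = {src: 0.0}
        heap = [(0.0, src, [src])]
        while heap:
            cost, node, path = heapq.heappop(heap)
            if node == dst:
                return cost, path
            for nbr, dist, fatigue in graph[node]:
                c = cost + W_DIST * dist + W_FATIGUE * fatigue
                if c < best.get(nbr, float("inf")):
                    best[nbr] = c
                    heapq.heappush(heap, (c, nbr, path + [nbr]))
        return float("inf"), []

    print(dijkstra("parking", "platform"))   # elevator route wins under these weights

In the paper's setting the edge attributes would come from the surveyed fatigue, freshness, preference, and time factors rather than the two toy attributes used here.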

Real-time Fluid Animation using Particle Dynamics Simulation and Pre-integrated Volume Rendering (입자 동역학 시뮬레이션과 선적분 볼륨 렌더링을 이용한 실시간 유체 애니메이션)

  • Lee Jeongjin;Kang Moon Koo;Kim Dongho;Shin Yeong Gil
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.32 no.1
    • /
    • pp.29-38
    • /
    • 2005
  • The fluid animation procedure consists of physical simulation and visual rendering. In the physical simulation of fluids, the most frequently used approaches are the numerical simulation of fluid particles using particle dynamics equations and the continuum analysis of flow via the Navier-Stokes equations. The particle dynamics method is fast to compute, but the resulting fluid motion can be unrealistic under some conditions. The method using the Navier-Stokes equations, on the contrary, yields lifelike fluid motion when properly conditioned, yet its computational complexity keeps it from being used in real-time applications. Likewise, global illumination generally produces premium-quality rendered images but is excessively slow for real-time use. In this paper, we propose a rapid fluid animation method that combines an enhanced particle dynamics simulation with a pre-integrated volume rendering technique. The particle dynamics simulation of fluid flow is conducted in real time using the Lennard-Jones model, and the computational efficiency is enhanced so that a small number of particles can represent a significant volume. For real-time rendering, the pre-integrated volume rendering method is used so that fewer slices than before can construct seamless inter-laminar shading. The proposed method successfully simulates and renders fluid motion in real time at an acceptable speed and visual quality. (A naive Lennard-Jones force sketch follows below.)
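
A minimal sketch of the particle dynamics side is shown below: a naive O(N^2) Lennard-Jones force loop and a semi-implicit Euler step. The parameters, lattice initialisation, and integrator are illustrative; the paper's optimised simulation and its pre-integrated volume rendering stage are not reproduced here.

    # Naive sketch of the particle dynamics stage: an O(N^2) Lennard-Jones force
    # loop and a semi-implicit Euler step.  Parameters, lattice initialisation,
    # and the integrator are illustrative; the paper's optimised simulation and
    # the pre-integrated volume rendering stage are not reproduced here.
    import numpy as np

    EPS, SIGMA, DT = 1.0, 1.0, 1.0e-3

    def lj_forces(pos):
        n = len(pos)
        f = np.zeros_like(pos)
        for i in range(n):
            for j in range(i + 1, n):
                r = pos[i] - pos[j]
                d2 = np.dot(r, r)
                s6 = (SIGMA * SIGMA / d2) ** 3
                # |F| = 24*eps*(2*(sigma/r)^12 - (sigma/r)^6)/r, directed along r
                fij = 24.0 * EPS * (2.0 * s6 * s6 - s6) / d2 * r
                f[i] += fij
                f[j] -= fij
        return f

    side = np.arange(4) * 1.2                 # 4x4x4 lattice near the LJ minimum spacing
    pos = np.array(np.meshgrid(side, side, side)).reshape(3, -1).T.astype(float)
    vel = np.zeros_like(pos)
    for _ in range(100):                      # semi-implicit Euler steps, unit mass
        vel += lj_forces(pos) * DT
        pos += vel * DT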

A Study on the Applicability of Deep Learning Algorithm for Detection and Resolving of Occlusion Area (영상 폐색영역 검출 및 해결을 위한 딥러닝 알고리즘 적용 가능성 연구)

  • Bae, Kyoung-Ho;Park, Hong-Gi
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.20 no.11
    • /
    • pp.305-313
    • /
    • 2019
  • Recently, spatial information has been constructed actively from images obtained by drones. Because occlusion areas occur due to buildings as well as many obstacles such as trees, pedestrians, and banners in urban areas, an efficient way to resolve the problem is necessary. Instead of the traditional approach, which replaces the occluded area with other images obtained from different positions, various deep learning-based models were examined and compared. A comparison of the HOG feature descriptor, the machine learning-based SVM, and the deep learning-based DNN, CNN, and RNN showed that the CNN is used broadly to detect and classify objects. Until now, many studies have focused on the development and application of models, so it is difficult to select a single optimal model. On the other hand, deep learning-based detection and classification techniques are expected to improve, because many researchers are attempting to increase model accuracy as well as reduce computation time. In that case, the procedures for generating spatial information will change so that the occlusion area is detected and replaced with simulated images automatically, and the efficiency of time, cost, and workforce will also improve. (A minimal CNN patch classifier is sketched below.)
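
As a point of reference for the CNN-based detectors discussed above, a minimal patch classifier is sketched below (occluded vs. non-occluded). The architecture, the 64x64 patch size, and the two-class output are assumptions for illustration, not the models compared in the paper; PyTorch is used only as a convenient framework.

    # Minimal sketch of a small CNN patch classifier of the kind surveyed above
    # (occluded vs. non-occluded patch).  The architecture, 64x64 patch size, and
    # two-class output are assumptions for illustration, not the compared models.
    import torch
    import torch.nn as nn

    class OcclusionCNN(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 16 * 16, 2)   # assumes 64x64 inputs

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    model = OcclusionCNN()
    patch = torch.randn(1, 3, 64, 64)      # one hypothetical RGB image patch
    logits = model(patch)                  # scores for [non-occluded, occluded]
    print(logits.shape)                    # torch.Size([1, 2])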

Efficient and Privacy-Preserving Near-Duplicate Detection in Cloud Computing (클라우드 환경에서 검색 효율성 개선과 프라이버시를 보장하는 유사 중복 검출 기법)

  • Hahn, Changhee;Shin, Hyung June;Hur, Junbeom
    • Journal of KIISE
    • /
    • v.44 no.10
    • /
    • pp.1112-1123
    • /
    • 2017
  • As content providers further offload content-centric services to the cloud, data retrieval over the cloud typically returns many redundant items, because near-duplication of content is prevalent on the Internet. Simply fetching all data from the cloud severely degrades efficiency in terms of resource utilization and bandwidth, and data may be encrypted by multiple content providers under different keys to preserve privacy. Thus, locating near-duplicate data in a privacy-preserving way depends heavily on the ability to deduplicate redundant search results and return the best matches without decrypting the data. To this end, we propose an efficient near-duplicate detection scheme for encrypted data in the cloud. Our scheme has the following benefits. First, a single query is enough to locate near-duplicate data even if they are encrypted under different keys of multiple content providers. Second, storage, computation, and communication costs are reduced compared to existing schemes while achieving the same level of search accuracy. Third, scalability is significantly improved through a novel and efficient two-round detection that locates near-duplicate candidates over large quantities of data in the cloud. An experimental analysis with real-world data demonstrates the applicability of the proposed scheme to a practical cloud system. Finally, the proposed scheme is on average 70.6% faster than an existing scheme. (A generic, non-private near-duplicate sketch follows below.)
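
The general idea of near-duplicate candidate detection can be sketched with MinHash signatures over word shingles, as below. This plaintext sketch only illustrates deduplication of highly similar items; it does not model the paper's two-round protocol or its ability to operate on data encrypted under different content providers' keys.

    # Generic, non-private sketch of near-duplicate candidate detection using
    # MinHash signatures over word shingles.  It illustrates only the idea of
    # deduplicating highly similar items; the paper's two-round protocol and its
    # operation on data encrypted under different keys are not modelled here.
    import hashlib

    def shingles(text, k=3):
        words = text.lower().split()
        return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

    def minhash(shingle_set, num_hashes=64):
        return [min(int(hashlib.md5(f"{seed}:{s}".encode()).hexdigest(), 16)
                    for s in shingle_set)
                for seed in range(num_hashes)]

    def est_similarity(sig_a, sig_b):
        return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

    a = minhash(shingles("the quick brown fox jumps over the lazy dog today"))
    b = minhash(shingles("the quick brown fox jumped over the lazy dog today"))
    print(est_similarity(a, b))   # high value -> near-duplicate candidates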

Comparison of Message Passing Interface and Hybrid Programming Models to Solve Pressure Equation in Distributed Memory System (분산 메모리 시스템에서 압력방정식의 해법을 위한 MPI와 Hybrid 병렬 기법의 비교)

  • Jeon, Byoung Jin;Choi, Hyoung Gwon
    • Transactions of the Korean Society of Mechanical Engineers B
    • /
    • v.39 no.2
    • /
    • pp.191-197
    • /
    • 2015
  • The message passing interface (MPI) and hybrid programming models for the parallel computation of a pressure equation were compared in a distributed memory system. Both models were based on domain decomposition, and two sub-domain counts were selected in consideration of the efficiency of the hybrid model. The parallel performance for various problem sizes was measured using up to 96 threads. It was found that, in addition to the cache-memory size, the overhead of MPI communication and OpenMP directives affected the parallel performance. For small problems, the parallel performance was low because the proportion of MPI communication/OpenMP directive overhead increased with the number of threads, and MPI outperformed the hybrid model because it had a smaller communication overhead. For large problems, the parallel performance was high because, in addition to the cache effect, the proportion of communication overhead was relatively low compared with that for small problems, and the hybrid model outperformed MPI because the communication overhead of MPI was more dominant than that of the OpenMP directives in the hybrid model. (An mpi4py halo-exchange sketch follows below.)
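
The communication pattern being benchmarked can be sketched with mpi4py: a 1-D domain-decomposed Jacobi sweep for a Poisson-type pressure equation with ghost-cell (halo) exchange between neighbouring ranks. Grid size, boundary conditions, and iteration count are illustrative, and the OpenMP half of the hybrid model is not shown.

    # Sketch of the benchmarked pattern: a 1-D domain-decomposed Jacobi sweep for
    # a Poisson-type pressure equation with ghost-cell (halo) exchange between
    # neighbouring MPI ranks.  Grid size, right-hand side, and iteration count
    # are illustrative; the OpenMP part of the hybrid model is not shown.
    # Run with, for example:  mpiexec -n 4 python jacobi_mpi.py  (file name arbitrary)
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    n_local = 100                               # interior points per rank
    u = np.zeros(n_local + 2)                   # +2 ghost cells
    f = np.ones(n_local + 2)                    # toy right-hand side
    h = 1.0 / (n_local * size + 1)
    left = rank - 1 if rank > 0 else MPI.PROC_NULL
    right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

    for _ in range(200):
        # exchange halo values with the left/right neighbours
        comm.Sendrecv(u[1:2], dest=left, recvbuf=u[-1:], source=right)
        comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[:1], source=left)
        # Jacobi update of the interior points for u'' = f (zero boundary values)
        u[1:-1] = 0.5 * (u[:-2] + u[2:] - h * h * f[1:-1])

    print(rank, u[1:4])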

Prestack Depth Migration for Gas Hydrate Seismic Data of the East Sea (동해 가스 하이드레이트 탄성파자료의 중합전 심도 구조보정)

  • Jang, Seong-Hyung;Suh, Sang-Yong;Go, Gin-Seok
    • Economic and Environmental Geology
    • /
    • v.39 no.6 s.181
    • /
    • pp.711-717
    • /
    • 2006
  • In order to study gas hydrate, a potential future energy resource, the Korea Institute of Geoscience and Mineral Resources has conducted seismic reflection surveys in the East Sea since 1997. One piece of evidence for the presence of gas hydrate in seismic reflection data is a bottom simulating reflector (BSR). The BSR occurs at the interface between overlying higher-velocity, hydrate-bearing sediment and underlying lower-velocity, free-gas-bearing sediment, and is often characterized by a large reflection coefficient and a reflection polarity reversed with respect to the seafloor reflection. Applying depth migration to seismic reflection data requires high-performance computers and a parallelization technique because of the huge data volume and computation involved. Phase shift plus interpolation (PSPI) is a useful migration method owing to its low computing time and computational efficiency, and it is intrinsically parallelizable in the frequency domain. We conducted conventional data processing for the gas hydrate data of the East Sea and then applied prestack depth migration using message-passing-interface PSPI (MPI_PSPI) parallelized with MPI local-area-multi-computer (MPI_LAM). The velocity model was built from the stack velocities after picking horizons on the stack image with the in-house processing tool Geobit. On the migrated stack section, BSRs were found at about SP 3555-4162 and a two-way travel time of around 2,950 ms in the time domain; in the depth domain, they appear at a distance of 6-17 km and a depth of 2.1 km below the seafloor. Since the subsurface where energy was concentrated was well imaged, acquisition parameters should be chosen so that seismic energy is transmitted to the target area. (A small sketch of a phase-shift extrapolation step follows below.)
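
The building block of PSPI can be sketched as a single constant-velocity phase-shift extrapolation step in the frequency-wavenumber domain; full PSPI repeats this for several reference velocities and interpolates between the results. The wavefield, sampling intervals, velocity, and depth step below are hypothetical, and the sign convention of the extrapolator is one of several in use.

    # Sketch of a single constant-velocity phase-shift extrapolation step in the
    # frequency-wavenumber domain -- the building block PSPI applies for several
    # reference velocities before interpolating.  The wavefield, sampling,
    # velocity, and depth step are hypothetical, and the sign convention is one
    # of several in use.
    import numpy as np

    def phase_shift_step(P_fk, freqs, kx, v, dz):
        """Downward-continue P(kx, w, z) to z + dz for one velocity v."""
        w = 2.0 * np.pi * freqs[:, None]              # angular frequency (column)
        kz2 = (w / v) ** 2 - kx[None, :] ** 2
        propagating = kz2 > 0.0                       # drop evanescent energy
        kz = np.sqrt(np.where(propagating, kz2, 0.0))
        return np.where(propagating, P_fk * np.exp(1j * kz * dz), 0.0)

    nt, nx, dt, dx = 256, 128, 0.004, 12.5
    data = np.random.randn(nt, nx)                    # stand-in for a shot record
    P_fk = np.fft.fft2(data)                          # (t, x) -> (w, kx)
    freqs = np.fft.fftfreq(nt, dt)
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, dx)
    P_next = phase_shift_step(P_fk, freqs, kx, v=1500.0, dz=10.0)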

Performance Analysis of STBC System Combined with Convolution Code for Improvement of Transmission Reliability (전송신뢰성의 향상을 위해 STBC에 컨볼루션 코드를 연계한 시스템의 성능분석)

  • Shin, Hyun-Jun;Kang, Chul-Gyu;Oh, Chang-Heon
    • Journal of Advanced Navigation Technology
    • /
    • v.15 no.6
    • /
    • pp.1068-1074
    • /
    • 2011
  • In this paper, the proposed scheme is a space-time block code (STBC) system combined with a convolution code, the most popular channel code for ensuring the reliability of data transmission in high-data-rate wireless communication. STBC is one of the MIMO (multi-input multi-output) techniques. In addition, the scheme uses a modified Viterbi algorithm to obtain a higher system gain during transmission. Because STBC and the convolution code are combined, the proposed scheme has a slightly higher computational load, but it obtains the maximal diversity gain of STBC and the high coding gain of the convolution code at the same time. Unlike the existing Viterbi decoding algorithm, which uses the Hamming distance to calculate the branch metric, the modified Viterbi algorithm uses the Euclidean distance between the received symbol and the reference symbol. Simulation results show that the modified Viterbi algorithm improves the gain by 7.5 dB for STBC 2Tx-2Rx at a BER of $10^{-2}$. Therefore, the proposed scheme combining STBC with a convolution code can improve transmission reliability and transmission efficiency. (A sketch of Euclidean-metric soft Viterbi decoding follows below.)
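
The metric change can be illustrated with a generic soft-decision Viterbi decoder for a rate-1/2, constraint-length-3 convolutional code (generators 7, 5 octal), using the squared Euclidean distance between received BPSK symbols and branch symbols as the branch metric in place of Hamming distance. The code, channel model, and noise level are generic textbook choices, not the paper's exact STBC 2Tx-2Rx configuration.

    # Generic sketch of soft-decision Viterbi decoding for a rate-1/2, K=3
    # convolutional code (generators 7, 5 octal), using the squared Euclidean
    # distance between received BPSK symbols and branch symbols as the branch
    # metric instead of Hamming distance.  The code, channel, and noise level
    # are textbook choices, not the paper's exact STBC 2Tx-2Rx configuration.
    import numpy as np

    def encode(bits):
        s1 = s2 = 0
        out = []
        for b in bits:
            out += [b ^ s1 ^ s2, b ^ s2]     # g1 = 111, g2 = 101
            s1, s2 = b, s1
        return out

    def bpsk(bits):                          # 0 -> +1, 1 -> -1
        return 1.0 - 2.0 * np.asarray(bits, dtype=float)

    def viterbi_soft(rx):                    # rx: real symbols, 2 per info bit
        n = len(rx) // 2
        states = [(s1, s2) for s1 in (0, 1) for s2 in (0, 1)]
        metric = {s: (0.0 if s == (0, 0) else np.inf) for s in states}
        paths = {s: [] for s in states}
        for k in range(n):
            r = rx[2 * k: 2 * k + 2]
            new_metric, new_paths = {}, {}
            for (s1, s2) in states:
                for b in (0, 1):
                    ref = bpsk([b ^ s1 ^ s2, b ^ s2])
                    m = metric[(s1, s2)] + np.sum((r - ref) ** 2)  # Euclidean metric
                    nxt = (b, s1)
                    if nxt not in new_metric or m < new_metric[nxt]:
                        new_metric[nxt] = m
                        new_paths[nxt] = paths[(s1, s2)] + [b]
            metric, paths = new_metric, new_paths
        return paths[min(metric, key=metric.get)]

    bits = [1, 0, 1, 1, 0, 0, 1, 0]
    rx = bpsk(encode(bits)) + 0.3 * np.random.randn(2 * len(bits))  # AWGN channel
    print(viterbi_soft(rx) == bits)          # True for most noise realisations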

Fast Motion Estimation for Variable Motion Block Size in H.264 Standard (H.264 표준의 가변 움직임 블록을 위한 고속 움직임 탐색 기법)

  • 최웅일;전병우
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.6
    • /
    • pp.209-220
    • /
    • 2004
  • The main features of the H.264 standard compared with conventional video standards are its high coding efficiency and network friendliness. Despite these outstanding features, it is not easy to implement an H.264 codec as a real-time system because of its high memory bandwidth requirement and intensive computation. Variable block size motion compensation using multiple reference frames is one of the key coding tools behind the main performance gain, but it demands substantial computational complexity because SAD (sum of absolute differences) values must be calculated over all possible combinations of coding modes to find the best motion vector. To speed up the motion estimation process, this paper therefore proposes fast algorithms for both integer-pel and fractional-pel motion search. Since many conventional fast integer-pel motion estimation algorithms are not suitable for the variable motion block sizes of H.264, we propose a motion field adaptive search using a hierarchical block structure based on the diamond search, applicable to variable motion block sizes. We also propose a fast fractional-pel motion search using a small diamond search centered on the predictive motion vector, based on the statistical characteristics of motion vectors. (A minimal small-diamond SAD search is sketched below.)
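
The small-diamond search pattern mentioned above can be sketched as a plain integer-pel SAD search started from a predicted motion vector. The synthetic frames, block size, and convergence rule below are illustrative; the paper's hierarchical, variable-block-size scheme and fractional-pel refinement are not reproduced.

    # Sketch of small-diamond-pattern, SAD-based integer-pel motion search
    # started from a predicted motion vector.  The synthetic frames, block size,
    # and convergence rule are illustrative; the paper's hierarchical
    # variable-block scheme and fractional-pel refinement are not reproduced.
    import numpy as np

    SMALL_DIAMOND = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]   # centre first

    def sad(cur_blk, ref, y, x, bs):
        h, w = ref.shape
        if y < 0 or x < 0 or y + bs > h or x + bs > w:
            return np.inf                                  # candidate outside frame
        return np.abs(cur_blk - ref[y:y + bs, x:x + bs]).sum()

    def small_diamond_search(cur, ref, by, bx, bs=8, pred=(0, 0)):
        cur_blk = cur[by:by + bs, bx:bx + bs]
        my, mx = pred                                      # start at the predicted MV
        while True:
            cands = [(my + dy, mx + dx) for dy, dx in SMALL_DIAMOND]
            costs = [sad(cur_blk, ref, by + cy, bx + cx, bs) for cy, cx in cands]
            best = int(np.argmin(costs))                   # ties favour the centre
            if cands[best] == (my, mx):
                return (my, mx), costs[best]               # centre wins -> converged
            my, mx = cands[best]

    yy, xx = np.mgrid[0:64, 0:64]
    cur = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / 200.0) * 255.0
    ref = np.roll(cur, shift=(2, -1), axis=(0, 1))         # reference shifted by (2, -1)
    print(small_diamond_search(cur, ref, by=24, bx=24))    # recovers an MV near (2, -1)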