• Title/Summary/Keyword: Computational Approaches


Analysis of GPU Performance and Memory Efficiency according to Task Processing Units (작업 처리 단위 변화에 따른 GPU 성능과 메모리 접근 시간의 관계 분석)

  • Son, Dong Oh;Sim, Gyu Yeon;Kim, Cheol Hong
    • Smart Media Journal
    • /
    • v.4 no.4
    • /
    • pp.56-63
    • /
    • 2015
  • Modern GPUs execute massively parallel computations by exploiting many GPU cores. The GPGPU architecture, one approach to exploiting the GPU's abundant computational resources, effectively executes general-purpose applications as well as graphics applications. In this paper, we investigate the impact of the number of CTAs (Cooperative Thread Arrays) per SM (Streaming Multiprocessor) on memory efficiency and performance, since analyzing this relation offers insight to researchers working to improve GPU performance. Our simulation results show that for most benchmarks, increasing the number of CTAs per SM improves performance. Some benchmarks, however, show no improvement, either because the kernel generates only a few CTAs or because too few CTAs execute simultaneously. To classify the performance results more precisely, we also analyze how performance relates to memory stalls, DRAM stalls due to interconnect congestion, and pipeline stalls at the memory stage. We expect our analysis to aid studies that improve parallelism and memory efficiency in GPGPU architectures.
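The latency-hiding effect this abstract studies can be illustrated with a toy analytical model (not the paper's simulator; all parameters and the `sm_utilization` function are hypothetical): while one CTA waits on memory, compute work from the other resident CTAs keeps the SM busy, so utilization rises with CTA count until the latency is fully hidden.

```python
def sm_utilization(num_ctas, compute_cycles, mem_latency_cycles):
    """Toy model: each CTA alternates compute work and one memory wait.

    The SM stays busy if, while one CTA waits on memory, the other
    resident CTAs supply enough compute work to cover the latency.
    """
    if num_ctas < 1:
        return 0.0
    # Compute work available to overlap one CTA's memory wait.
    overlap = (num_ctas - 1) * compute_cycles
    hidden = min(mem_latency_cycles, overlap)
    busy = num_ctas * compute_cycles
    total = busy + (mem_latency_cycles - hidden)
    return busy / total

# More concurrent CTAs hide more memory latency, up to saturation.
low = sm_utilization(1, compute_cycles=20, mem_latency_cycles=400)
mid = sm_utilization(8, compute_cycles=20, mem_latency_cycles=400)
high = sm_utilization(32, compute_cycles=20, mem_latency_cycles=400)
```

The model also mirrors the abstract's negative result: if a kernel generates only one or two CTAs, utilization stays low no matter how capable the SM is.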

Rethinking of the Uncertainty: A Fault-Tolerant Target-Tracking Strategy Based on Unreliable Sensing in Wireless Sensor Networks

  • Xie, Yi;Tang, Guoming;Wang, Daifei;Xiao, Weidong;Tang, Daquan;Tang, Jiuyang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.6 no.6
    • /
    • pp.1496-1521
    • /
    • 2012
  • Uncertainty is ubiquitous in target-tracking wireless sensor networks due to environmental noise, the randomness of target mobility, and other factors; sensing results are therefore always unreliable. This paper considers such unreliability as it occurs in wireless sensor networks and its impact on target-tracking accuracy. Firstly, we map the intersections of pairwise sensors' uncertain boundaries, which divide the monitored area into faces, each with a unique signature vector. For each target localization, a sampling vector is built after multiple grouped samplings determine whether the RSS (Received Signal Strength) ordering for a pair of nodes is ordinal or flipped. A Fault-Tolerant Target-Tracking (FTTT) strategy is proposed, which transforms the tracking problem into a vector-matching process that increases tracking flexibility and accuracy while reducing the influence of in-the-field factors. In addition, a heuristic matching algorithm is introduced to reduce the computational complexity. The fault tolerance of FTTT is also discussed. An extension of FTTT is then proposed that quantifies the pairwise uncertainty to further enhance robustness. Results show that FTTT is more flexible, more robust, and more accurate than comparable approaches.
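The core vector-matching idea can be sketched in a few lines (a minimal illustration, not the paper's algorithm; the face signatures, `locate` helper, and tolerance parameter are hypothetical): each face carries a binary signature of pairwise RSS orderings, and a face matches a sampling vector if the Hamming distance stays within a flip budget, which is where the fault tolerance comes from.

```python
def hamming(a, b):
    """Number of positions where two signature vectors disagree."""
    return sum(x != y for x, y in zip(a, b))

def locate(sampling, faces, max_flips=1):
    """Return face ids whose signature matches the sampling vector,
    tolerating up to `max_flips` flipped pairwise RSS readings."""
    return [fid for fid, sig in faces.items()
            if hamming(sampling, sig) <= max_flips]

# Toy map: three faces, signatures over three sensor pairs.
faces = {
    "F1": (1, 1, 0),
    "F2": (1, 0, 0),
    "F3": (0, 0, 1),
}
# One flipped reading in the sampling vector still resolves to F1.
candidates = locate((1, 1, 1), faces, max_flips=1)
```

Raising `max_flips` trades localization precision for robustness against unreliable sensing, mirroring the extension the abstract describes.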

Hybrid copy-move-forgery detection algorithm fusing keypoint-based and block-based approaches (특징점 기반 방식과 블록 기반 방식을 융합한 효율적인 CMF 위조 검출 방법)

  • Park, Chun-Su
    • Journal of Internet Computing and Services
    • /
    • v.19 no.4
    • /
    • pp.7-13
    • /
    • 2018
  • Methods for detecting copy-move forgery (CMF) fall into two categories: block-based and keypoint-based. Block-based methods have a high computational cost because a large number of blocks must be examined, and detection may fail if a tampered region undergoes a geometric transformation. Keypoint-based methods overcome these disadvantages, but they cannot detect a tampered region if the forgery occurs in a low-entropy region of the image. In this paper, we therefore propose a method that detects CMF in all areas of an image by combining the keypoint-based and block-based approaches. The proposed method first performs keypoint-based CMF detection on the entire image. The areas not covered by this check are then selected, and block-based CMF detection is performed on them. The proposed method thus detects CMF occurring anywhere in the image. Experimental results show that it achieves better forgery detection performance than conventional methods.
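The two-stage pipeline the abstract describes can be sketched as follows (a structural illustration only; the toy image, stage functions, and pixel-set interface are hypothetical stand-ins for real keypoint and block detectors): run the keypoint stage everywhere, then hand only the unexamined, typically low-entropy pixels to the expensive block stage.

```python
def hybrid_cmf(image, keypoint_stage, block_stage):
    """Keypoint stage over the whole image first; block stage only over
    the pixels the keypoint stage could not examine."""
    forged, examined = keypoint_stage(image)
    rows, cols = len(image), len(image[0])
    leftover = [(r, c) for r in range(rows) for c in range(cols)
                if (r, c) not in examined]
    return forged | block_stage(image, leftover)

# Toy stages on a 2x2 "image": the keypoint stage only covers the
# high-entropy half; the block stage catches a forgery in the rest.
def toy_keypoint_stage(image):
    examined = {(0, 0), (0, 1)}   # region with enough keypoints
    return set(), examined        # no forgery found there

def toy_block_stage(image, pixels):
    return {p for p in pixels if image[p[0]][p[1]] == "copied"}

img = [["a", "b"], ["copied", "c"]]
found = hybrid_cmf(img, toy_keypoint_stage, toy_block_stage)
```

The design point is cost: the block stage's quadratic matching runs only where the cheap keypoint stage has no coverage, rather than over the full image.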

Structural Analysis of Two-dimensional Continuum by Finite Element Method (유한요소법에 의한 이차원연속체의 구조해석)

  • 이재영;고재군
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.22 no.2
    • /
    • pp.83-100
    • /
    • 1980
  • This study computerized the structural analysis of two-dimensional continua by the finite element method and provides a preparatory basis for more sophisticated and more generalized computer programs of this kind. A computer program, applicable to any shape of two-dimensional continuum, was formulated on the basis of a 16-degree-of-freedom rectangular element. Various computational aspects of implementing the finite element method were reviewed and settled in the course of programming. The validity of the program was checked through several case studies. To assess the accuracy and convergence characteristics of the method, the results computed by the program were compared with solutions by other methods, namely the analytical Navier method and the framework method. Through the programming and analysis of the computed results, the following facts were recognized: 1) The stiffness matrix should be assembled in condensed form so that the continuum can be discretized into a practically adequate number of elements without using back-up storage. 2) To minimize solution time, in-core solution of the equilibrium equations is essential; LDL^T decomposition is recommended for stiffness matrices condensed by the compacted-column storage scheme. 3) For rectangular plates, the finite element method performs better, both in accuracy and in rate of convergence, than the framework method; as the number of elements increases, the error of the finite element method approaches about 1%. 4) Regardless of the structural shape, convergence characteristics depend uniformly on the shape of the element, with square elements performing best. 5) The accuracy of the computation is independent of the interpolation function selected.
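The LDL^T decomposition recommended in finding 2) factors a symmetric stiffness matrix K into a unit lower-triangular L and a diagonal D with K = L·D·L^T, avoiding the square roots of Cholesky. A minimal dense sketch (the paper works with compacted-column storage; this illustration ignores that and uses plain lists):

```python
def ldlt(A):
    """LDL^T factorization of a symmetric positive-definite matrix.

    Returns (L, d) with A = L * diag(d) * L^T, L unit lower triangular.
    """
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    d = [0.0] * n
    for j in range(n):
        # Diagonal entry: remove contributions of earlier columns.
        d[j] = A[j][j] - sum(L[j][k] ** 2 * d[k] for k in range(j))
        L[j][j] = 1.0
        for i in range(j + 1, n):
            L[i][j] = (A[i][j]
                       - sum(L[i][k] * L[j][k] * d[k] for k in range(j))) / d[j]
    return L, d

# Tiny symmetric positive-definite "stiffness" matrix.
L, d = ldlt([[4.0, 2.0], [2.0, 3.0]])
```

Once L and d are stored (in place, in the compacted columns), the equilibrium equations K·u = f are solved by forward substitution, a diagonal scaling, and back substitution, all in core.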


Influence of end fixity on post-yield behaviors of a tubular member

  • Cho, Kyu Nam
    • Structural Engineering and Mechanics
    • /
    • v.13 no.5
    • /
    • pp.557-568
    • /
    • 2002
  • To evaluate the capability of a tubular member of an offshore structure to absorb collision energy, a simple method can be employed without performing a detailed analysis. The most common simple method is the rigid-plastic method. However, this method includes no characteristics of horizontal movement or rotation at the ends of the tubular member. In a real offshore structure, tubular members receive a certain degree of elastic support from the adjacent structure, and this end fixity influences the member's behavior. Three-dimensional FEM analysis can include the effect of end fixity fully, but in view of the inherent computational complexity of the 3-D approach, it is not recommended at the initial design stage. In this paper, the influence of end fixity on the behavior of a tubular member is investigated through a new approach and compared with other approaches. A new analysis approach that includes the flexibility of the boundary points of the member is developed. The flexibility at the ends of a tubular element is extracted by a rational reduction of the modeling characteristics, based on static condensation of the model's global stiffness matrix to the end nodal points of the tubular element. The load-displacement relation at the collision point of the tubular member, with and without the end flexibility, is obtained and compared. The new method lies between the rigid-plastic method and full three-dimensional analysis. The rigid-plastic method overstates the strengthening membrane effect of the member during global deformation, giving a steeper load-displacement slope than the present method, while full 3-D analysis gives a weaker membrane effect and a more gradual curve. Comparing the load-displacement curves of the new approach with those of the conventional methods quantifies the influence of end fixity on the post-yield behavior of the tubular member. A main contribution of this investigation is the development of a rational analytical procedure for determining the post-yield behavior of a tubular member in offshore structures.
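The static condensation step the abstract relies on eliminates interior degrees of freedom from a stiffness matrix, leaving a reduced matrix on the boundary (end-node) DOFs: K* = K_bb − K_bi·K_ii⁻¹·K_ib. A minimal sketch for condensing out a single interior DOF (the example matrix is an illustrative spring chain, not from the paper):

```python
def condense_one(K, i):
    """Statically condense out interior DOF i (zero interior load):
    K*_ab = K_ab - K_ai * K_ib / K_ii for all retained DOFs a, b."""
    keep = [r for r in range(len(K)) if r != i]
    return [[K[a][b] - K[a][i] * K[i][b] / K[i][i] for b in keep]
            for a in keep]

# Three-DOF spring chain; condense the middle (interior) DOF so only
# the two end DOFs remain, as when reducing to a member's end nodes.
K = [[2.0, -1.0, 0.0],
     [-1.0, 2.0, -1.0],
     [0.0, -1.0, 2.0]]
K_star = condense_one(K, 1)
```

Repeating this over all interior DOFs (or doing it blockwise with a solve against K_ii) yields the end-point flexibility the new approach feeds into the member analysis; the condensed matrix reproduces the full model's static response exactly at the retained DOFs.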

Analysis of toxicity using bio-digital contents (바이오 디지털 콘텐츠를 이용한 독성의 분석)

  • Kang, Jin-Seok
    • Journal of Digital Contents Society
    • /
    • v.11 no.1
    • /
    • pp.99-104
    • /
    • 2010
  • Numerous bio-digital contents have been produced by new technologies, such as biochips, for analyzing genes induced early by chemical exposure. These contents have little meaning by themselves, so they should be processed and interpreted with their biological meaning in mind. They include genomics, transcriptomics, proteomics, and metabolomics, collectively termed omics. Omics tools can be applied to toxicology, forming the new field of toxicogenomics. A toxicogenomics approach can estimate toxicity more quickly and accurately by analyzing gene, protein, and metabolite profiles. Such approaches should help not only to discover highly sensitive and predictive biomarkers but also to understand the molecular mechanism(s) of toxicity, building on advances in analytical technology. Furthermore, bio-digital contents are more informative when obtained from specific cells undergoing the biological events of interest than from whole-cell populations. Taken together, bio-digital contents should be analyzed with carefully designed computational algorithms under well-designed experimental protocols, together with network analysis and related in-depth databases.

Bayesian methods in clinical trials with applications to medical devices

  • Campbell, Gregory
    • Communications for Statistical Applications and Methods
    • /
    • v.24 no.6
    • /
    • pp.561-581
    • /
    • 2017
  • Bayesian statistics can play a key role in the design and analysis of clinical trials, and this has been demonstrated for medical device trials. By 1995 Bayesian statistics had been well developed, and the revolution in computing power and the development of Markov chain Monte Carlo methods brought the calculation of posterior distributions within computational reach. The Food and Drug Administration (FDA) initiative on Bayesian statistics in medical device clinical trials, which began almost 20 years ago, is reviewed in detail along with some of the key decisions that were made along the way. Both Bayesian hierarchical modeling using data from previous studies and Bayesian adaptive designs, usually with a non-informative prior, are discussed. The leveraging of prior study data has been accomplished through Bayesian hierarchical modeling. An enormous advantage of Bayesian adaptive designs is achieved when they are accompanied by modeling of the primary endpoint to produce the predictive posterior distribution. Simulations are crucial to providing the operating characteristics of a Bayesian design, especially for a complex adaptive design. The 2010 FDA Bayesian guidance for medical device trials addressed both approaches as well as exchangeability, Type I error, and sample size. Treatment response adaptive randomization using the famous extracorporeal membrane oxygenation example is discussed. An interesting real example of a Bayesian analysis using a failed trial with an interesting subgroup as prior information is presented. The implications of the likelihood principle are considered. A recent exciting area using Bayesian hierarchical modeling has been pediatric extrapolation using adult data in clinical trials. Historical control information from previous trials is an underused area that lends itself easily to Bayesian methods.
The future including recent trends, decision theoretic trials, Bayesian benefit-risk, virtual patients, and the appalling lack of penetration of Bayesian clinical trials in the medical literature are discussed.
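The predictive posterior distribution mentioned above has a closed form in the simplest setting: with a Beta prior on a binary response rate, the number of successes among future patients follows a Beta-Binomial distribution. A minimal sketch (a textbook conjugate-prior calculation, not the FDA's or the author's specific models; `predictive_success` and its thresholds are illustrative):

```python
from math import comb, exp, lgamma

def betabinom_pmf(k, m, a, b):
    """Posterior predictive P(k successes in m future patients) when
    the response rate has a Beta(a, b) posterior."""
    def log_beta(x, y):
        return lgamma(x) + lgamma(y) - lgamma(x + y)
    return comb(m, k) * exp(log_beta(a + k, b + m - k) - log_beta(a, b))

def predictive_success(successes, n, m, needed, a0=1.0, b0=1.0):
    """Probability the trial reaches `needed` total successes after m
    more patients, given successes/n observed (Beta(a0, b0) prior)."""
    a, b = a0 + successes, b0 + n - successes
    remaining = max(0, needed - successes)
    return sum(betabinom_pmf(k, m, a, b) for k in range(remaining, m + 1))
```

In an adaptive design, quantities like this drive interim decisions (stop for futility when the predictive probability of eventual success is low), and simulating the design over many hypothetical trials yields the operating characteristics the abstract emphasizes.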

A High-Speed Korean Morphological Analysis Method based on Pre-Analyzed Partial Words (부분 어절의 기분석에 기반한 고속 한국어 형태소 분석 방법)

  • Yang, Seung-Hyun;Kim, Young-Sum
    • Journal of KIISE:Software and Applications
    • /
    • v.27 no.3
    • /
    • pp.290-301
    • /
    • 2000
  • Most morphological analysis methods require repetitive procedures: input character-code conversion, segmentation and lemmatization of constituent morphemes, and filtering of candidate results through lexicon lookups, all of which cause run-time inefficiency. To alleviate this problem, many systems have introduced the notion of 'pre-analysis' of words. However, a method based on a pre-analysis dictionary of surface words also has a critical practical drawback, because the dictionary grows indefinitely as it tries to cover all words. This paper hybridizes the two extreme approaches to overcome the problems of both, and presents a method of morphological analysis based on the pre-analysis of partial words. Under this hybrid scheme, most computational overheads, such as segmentation and lemmatization of morphemes, are shifted to the offline construction of the pre-analysis dictionaries, and run-time dictionary lookups are greatly reduced, enhancing the run-time performance of the system. Additional computing overheads such as input character-code conversion are also avoided, because the method relies on no graphemic processing.
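The run-time side of such a scheme reduces to dictionary lookup. A minimal sketch (the dictionary entries, tags, and longest-match policy here are illustrative assumptions, not the paper's actual data structures): partial words map directly to pre-computed morpheme analyses, so no segmentation or lemmatization happens at analysis time.

```python
# Hypothetical pre-analysis dictionary, built offline:
# partial word -> pre-computed list of (morpheme, tag) pairs.
PREANALYZED = {
    "하": [("하", "VV")],
    "했다": [("하", "VV"), ("았", "EP"), ("다", "EF")],
}

def analyze(word):
    """Greedy longest-match lookup over pre-analyzed partial words;
    run time is pure dictionary lookup, no morphological computation."""
    result, pos = [], 0
    while pos < len(word):
        for end in range(len(word), pos, -1):
            part = word[pos:end]
            if part in PREANALYZED:
                result.extend(PREANALYZED[part])
                pos = end
                break
        else:
            # No pre-analysis covers this character; mark it unknown.
            result.append((word[pos], "UNK"))
            pos += 1
    return result
```

Keying the dictionary on partial words rather than whole surface words is what keeps its size bounded while still letting most lookups succeed in one step.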


Effects of SW Training using Robot Based on Card Coding on Learning Motivation and Attitude (카드 코딩 기반의 로봇을 활용한 SW 교육이 학습동기 및 태도에 미치는 영향)

  • Jun, SooJin
    • Journal of The Korean Association of Information Education
    • /
    • v.22 no.4
    • /
    • pp.447-455
    • /
    • 2018
  • The purpose of this study is to investigate the effects of SW education using a card-coded robot on the learning motivation and attitudes of elementary school students. To this end, we conducted eight hours of SW education covering the CT concepts of sequence, repetition, events, and control, using the TrueTrue robot, which is programmed with command cards, with third-grade elementary school students. For the experiment, we measured learning motivation for SW education and attitudes toward robot-based SW education before and after instruction. As a result, the students' motivation to learn SW showed a statistically significant improvement. In addition, attitudes toward robot-based SW education improved significantly on the scales "good, convenient, interesting, easy, friendly, active, special, understandable, simple". These results are expected to contribute to the expansion of SW education through its various approaches.

Optimizing dispatching strategy based on multicriteria heuristics for AGVs in automated container terminal (자동화 컨테이너 터미널의 복수 규칙 기반 AGV 배차 전략 최적화)

  • Kim, Jeong-Min;Choe, Ri;Park, Tae-Jin;Ryu, Kwang-Ryul
    • Proceedings of the Korean Institute of Navigation and Port Research Conference
    • /
    • 2011.06a
    • /
    • pp.218-219
    • /
    • 2011
  • This paper focuses on a dispatching strategy for AGVs (Automated Guided Vehicles). The goal of the AGV dispatching problem is to allocate jobs to AGVs so as to minimize QC delay and total AGV travel distance. Due to the highly dynamic nature of the container terminal environment, the effect of a dispatching decision is hard to predict, leading to frequent modification of dispatching results. In this situation, single-rule-based approaches are widely used for their simplicity and small computational cost, but they cannot guarantee satisfactory performance across multiple performance measures. In this paper, a dispatching strategy based on multicriteria heuristics is proposed. The strategy combines multiple decision criteria, and a multi-objective evolutionary algorithm is applied to optimize the weights of those criteria. Simulation experiments show that the proposed approach outperforms single-rule-based dispatching approaches.
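The multicriteria idea can be sketched as a weighted scoring rule (a simplified illustration; the criteria, the greedy assignment, and the weight values are hypothetical, and in the paper the weights are tuned by a multi-objective evolutionary algorithm rather than fixed by hand):

```python
def dispatch(agvs, jobs, weights):
    """Score every (AGV, job) pair as a weighted sum of criteria
    (lower is better) and greedily assign the best pairs."""
    def score(agv, job):
        travel = abs(agv["pos"] - job["pickup"])   # AGV travel distance
        urgency = job["qc_deadline"]               # QC delay proxy
        return weights[0] * travel + weights[1] * urgency

    pairs = [(score(a, j), ai, ji)
             for ai, a in enumerate(agvs)
             for ji, j in enumerate(jobs)]
    assigned, used_a, used_j = [], set(), set()
    for _, ai, ji in sorted(pairs):
        if ai not in used_a and ji not in used_j:
            assigned.append((ai, ji))
            used_a.add(ai)
            used_j.add(ji)
    return assigned

agvs = [{"pos": 0}, {"pos": 10}]
jobs = [{"pickup": 9, "qc_deadline": 5}, {"pickup": 1, "qc_deadline": 1}]
plan = dispatch(agvs, jobs, weights=(1.0, 1.0))
```

Because the rule re-scores all pairs at each dispatch event, it stays cheap enough for a dynamic terminal while letting the tuned weights balance QC delay against travel distance.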
