• Title/Summary/Keyword: Algorithm partition

Analysis of Saccharomyces Cell Cycle Expression Data using Bayesian Validation of Fuzzy Clustering (퍼지 클러스터링의 베이지안 검증 방법을 이용한 발아효모 세포주기 발현 데이타의 분석)

  • Yoo Si-Ho;Won Hong-Hee;Cho Sung-Bae
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.12
    • /
    • pp.1591-1601
    • /
    • 2004
  • Clustering, a technique for gene analysis, organizes patterns into groups according to their similarity in the dataset and has been used to identify the functions of the genes in a cluster or to analyze the functions of unknown genes. Since genes usually belong to multiple functional families, fuzzy clustering methods are more appropriate than conventional hard clustering methods, which assign each sample to a single group. In this paper, a Bayesian validation method is proposed to evaluate fuzzy partitions effectively. The Bayesian validation method is a probability-based approach that selects the fuzzy partition with the largest posterior probability given the dataset. First, the proposed Bayesian validation method is compared to four representative conventional fuzzy cluster validity measures on four well-known datasets to which the fuzzy c-means algorithm (sketched below) is applied. Then, we analyze the results for Saccharomyces cell cycle expression data evaluated by the proposed method.
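A minimal fuzzy c-means sketch of the kind applied before validation, assuming Euclidean distance and a random initial membership matrix; the Bayesian validity score itself is not reproduced, and all names are illustrative only.

```python
# Fuzzy c-means: alternate between recomputing cluster centers and fuzzy
# memberships; the resulting fuzzy partition U would then be scored by a
# validity measure such as the paper's Bayesian criterion.
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """X: (n_samples, n_features); c: number of clusters; m > 1: fuzzifier."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, X.shape[0]))
    U /= U.sum(axis=0, keepdims=True)              # memberships of each sample sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        dist = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        U = dist ** (-2.0 / (m - 1))               # standard FCM membership update
        U /= U.sum(axis=0, keepdims=True)
    return centers, U
```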

Design and Implementation of a Concurrency Control Manager for Main Memory Databases (주기억장치 데이터베이스를 위한 동시성 제어 관리자의 설계 및 구현)

  • Kim, Sang-Wook;Jang, Yeon-Jeong;Kim, Yun-Ho;Kim, Jin-Ho;Lee, Seung-Sun;Choi, Wan
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.25 no.4B
    • /
    • pp.646-680
    • /
    • 2000
  • In this paper, we discuss the design and implementation of a concurrency control manager for a main memory DBMS (MMDBMS). Since an MMDBMS, unlike a disk-based DBMS, performs all data update and retrieval operations by accessing main memory only, the portion of the concurrency control cost in the total cost of an update or retrieval is fairly high. Thus, an efficient concurrency control manager greatly improves the performance of the entire system. Our concurrency control manager employs the two-phase locking protocol and has the following characteristics. First, it adopts the partition, an allocation unit of main memory, as the locking granule (sketched below), and thus effectively adjusts the trade-off between system concurrency and locking cost through the analysis of applications. Second, it keeps locking costs low by maintaining the lock information directly in the partition itself. Third, it provides the latch as a mechanism for physical consistency of system data. Our latch supports both shared and exclusive modes, and maximizes CPU utilization by combining the Bakery algorithm and the Unix semaphore facility. Fourth, to solve the deadlock problem, it periodically examines whether the system is in a deadlock state using lock waiting information. In addition, we discuss various issues arising during development, such as mutual exclusion on the transaction table, mutual exclusion on indexes and system catalogs, and real-time application support.
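A minimal sketch of partition-granularity shared/exclusive locking, written as hypothetical Python rather than the paper's C-level implementation; the Bakery algorithm, Unix semaphores, and deadlock detection are not reproduced.

```python
# Lock table keyed by partition id: the partition, an allocation unit of main
# memory, is the locking granule; each entry supports shared/exclusive modes.
import threading
from collections import defaultdict

class PartitionLock:
    def __init__(self):
        self.cond = threading.Condition()
        self.readers = 0          # transactions holding the lock in shared mode
        self.writer = None        # transaction holding the lock in exclusive mode

    def lock_shared(self, txn_id):
        with self.cond:
            while self.writer is not None:
                self.cond.wait()
            self.readers += 1

    def lock_exclusive(self, txn_id):
        with self.cond:
            while self.writer is not None or self.readers > 0:
                self.cond.wait()
            self.writer = txn_id

    def unlock(self, txn_id):
        with self.cond:
            if self.writer == txn_id:
                self.writer = None
            elif self.readers > 0:
                self.readers -= 1
            self.cond.notify_all()

class LockManager:
    def __init__(self):
        self.table = defaultdict(PartitionLock)   # lock information per partition

    def acquire(self, txn_id, partition_id, exclusive=False):
        lock = self.table[partition_id]
        (lock.lock_exclusive if exclusive else lock.lock_shared)(txn_id)
        return lock
```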

Experimental Study on Source Locating Technique for Transversely Isotropic Media (횡등방성 매질의 음원추적기법에 대한 실험적 연구)

  • Choi, Seung-Beum;Jeon, Seokwon
    • Tunnel and Underground Space
    • /
    • v.25 no.1
    • /
    • pp.56-67
    • /
    • 2015
  • In this study, a source locating technique applicable to transversely isotropic media was developed. Wave velocity anisotropy was considered using the partition approximation method, which makes AE source location straightforward. Sets of P-wave arrival times were determined by the two-step AIC algorithm (sketched below) and then used to locate the AE sources at the partitioned element with the least error. To validate the technique, a pencil lead break test was carried out on an artificial transversely isotropic mortar specimen. Defining the absolute error as the distance between the pencil lead break point and the located point, the errors ranged from 1.60 mm to 14.46 mm with an average of 8.57 mm, which was regarded as acceptable considering the sizes of the specimen and the AE sensors. Comparing the absolute errors under different threshold levels showed only small discrepancies, so the technique is hardly affected by background noise. The absolute error can be decomposed into errors along each coordinate axis, which reveals the effect of AE sensor position; if the optimum sensor positions can be determined, a more precise outcome can be obtained.
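A sketch of an AIC-based P-wave arrival picker of the common Akaike-information-criterion form; the "two-step" behaviour is approximated here by a coarse pick over the whole trace followed by a refined pick in a local window, which may differ from the paper's exact scheme.

```python
# AIC picker: the arrival time is taken at the minimum of
# AIC(k) = k*log(var(x[:k])) + (N-k-1)*log(var(x[k:])).
import numpy as np

def aic_curve(x):
    N = len(x)
    aic = np.full(N, np.inf)
    for k in range(2, N - 2):
        v1, v2 = np.var(x[:k]), np.var(x[k:])
        if v1 > 0 and v2 > 0:
            aic[k] = k * np.log(v1) + (N - k - 1) * np.log(v2)
    return aic

def two_step_aic_pick(trace, refine_half_window=200):
    coarse = int(np.argmin(aic_curve(trace)))            # step 1: whole trace
    lo = max(0, coarse - refine_half_window)
    hi = min(len(trace), coarse + refine_half_window)
    return lo + int(np.argmin(aic_curve(trace[lo:hi])))  # step 2: local refinement
```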

Dynamic Block Reassignment for Load Balancing of Block Centric Graph Processing Systems (블록 중심 그래프 처리 시스템의 부하 분산을 위한 동적 블록 재배치 기법)

  • Kim, Yewon;Bae, Minho;Oh, Sangyoon
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.7 no.5
    • /
    • pp.177-188
    • /
    • 2018
  • The scale of graph data has increased rapidly because of the growth of mobile Internet applications and the proliferation of social network services. This brings an imminent need for efficient distributed and parallel graph processing, since the size of such large-scale graphs easily exceeds the capacity of a single machine. Currently, there are two popular parallel graph processing approaches: vertex-centric and block-centric graph processing. While the vertex-centric approach can easily be applied to a parallel processing system, the block-centric approach was proposed to compensate for the drawbacks of the vertex-centric approach. In these systems, the initial quality of the graph partition affects the overall performance significantly. However, partitioning the graph optimally at the initial phase is a very difficult problem, so several dynamic load balancing techniques that perform progressive partitioning during graph processing have been studied. In this paper, we present a load balancing algorithm for the block-centric approach, whereas most dynamic load balancing techniques focus on vertex-centric systems. The proposed algorithm focuses on improving graph partition quality by dynamically reassigning blocks at runtime (sketched below) and suggests a block split strategy for escaping local optima.
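A hypothetical greedy rendering of runtime block reassignment with a simple split rule, written to illustrate the idea only; the paper's actual reassignment and split criteria are not reproduced, and the function and parameter names are invented.

```python
# Move blocks from the most loaded worker to the least loaded one until the
# load imbalance falls under a tolerance; a block much larger than the gap is
# split first so that a sub-block of roughly the right size can be moved.
from collections import defaultdict

def rebalance(block_load, assignment, imbalance_tol=0.05,
              split_threshold=2.0, max_moves=100):
    """block_load: {block_id: load}; assignment: {block_id: worker_id} (updated in place)."""
    workers = defaultdict(list)
    for b, w in assignment.items():
        workers[w].append(b)
    target = sum(block_load.values()) / len(workers)

    def load(w):
        return sum(block_load[b] for b in workers[w])

    for _ in range(max_moves):
        heavy = max(workers, key=load)
        light = min(workers, key=load)
        gap = load(heavy) - target
        if gap <= imbalance_tol * target:
            break
        # block on the heavy worker whose load best matches the gap
        cand = min(workers[heavy], key=lambda b: abs(block_load[b] - gap))
        if block_load[cand] > split_threshold * gap:
            block_load[cand] -= gap                 # split: carve off a sub-block
            new_id = (cand, 'split')
            block_load[new_id] = gap
            workers[heavy].append(new_id)
            cand = new_id
        workers[heavy].remove(cand)
        workers[light].append(cand)
        assignment[cand] = light
    return assignment
```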

Conceptual eco-hydrological model reflecting the interaction of climate-soil-vegetation-groundwater table in humid regions (습윤 지역의 기후-토양-식생-지하수위 상호작용을 반영한 개념적인 생태 수문 모형)

  • Choi, Jeonghyeon;Kim, Sangdan
    • Journal of Korea Water Resources Association
    • /
    • v.54 no.9
    • /
    • pp.681-692
    • /
    • 2021
  • Vegetation processes have a significant impact on rainfall-runoff processes through their control of evapotranspiration, but they are rarely considered in conceptual lumped hydrological models. This study evaluated model performance for the Hapcheon Dam watershed by integrating an ecological module, which represents leaf area index data remotely sensed from satellites, into the hydrological partition module. The proposed eco-hydrological model has three main features to better represent eco-hydrological processes in humid regions: 1) the growth rate of vegetation is constrained by water shortage stress in the watershed; 2) the maximum growth of vegetation is limited by the energy of the watershed climate; 3) the interaction of vegetation and aquifers is reflected. The proposed model simultaneously simulates the hydrologic components and vegetation dynamics at the watershed scale. The following findings were obtained from the validation results using model parameters estimated by the SCEM algorithm. 1) Estimating the parameters of the eco-hydrological model using the leaf area index and streamflow data can predict streamflow with accuracy and robustness similar to those of the hydrological model without the ecological module. 2) Using the remotely sensed leaf area index without filtering as input data is not helpful in estimating streamflow. 3) The integrated eco-hydrological model provides an excellent estimate of the seasonal variability of the leaf area index.

Implementation of Parallel Local Alignment Method for DNA Sequence using Apache Spark (Apache Spark을 이용한 병렬 DNA 시퀀스 지역 정렬 기법 구현)

  • Kim, Bosung;Kim, Jinsu;Choi, Dojin;Kim, Sangsoo;Song, Seokil
    • The Journal of the Korea Contents Association
    • /
    • v.16 no.10
    • /
    • pp.608-616
    • /
    • 2016
  • The Smith-Waterman (SW) algorithm is a local alignment algorithm and one of the important operations in DNA sequence analysis. The SW algorithm finds the optimal local alignment with respect to the scoring system being used, but it demands a long execution time. To address this problem, methods that perform SW in a distributed and parallel manner have been proposed. ADAM, a distributed and parallel processing framework for DNA sequences, provides a parallel SW. However, the parallel SW of ADAM does not take into account that SW is a dynamic programming method, which limits its performance. In this paper, we propose a method to enhance the parallel SW of ADAM. The proposed parallel SW (PSW) is performed in two phases. In the first phase, the PSW splits a DNA sequence into a number of partitions and assigns them to multiple nodes; then the original Smith-Waterman algorithm (sketched below) is performed in parallel at each node. In the second phase, the PSW estimates the portions of the sequence that should be recalculated, and the recalculation is performed on those portions in parallel at each node. In the experiments, we compare the proposed PSW to the parallel SW of ADAM to show the superiority of the PSW.
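A minimal single-node Smith-Waterman scoring sketch, i.e. the per-partition primitive run at each node; the Spark-based two-phase partitioning and recalculation of the paper are not shown, and the scoring parameters are arbitrary.

```python
# Smith-Waterman local alignment score via the standard dynamic programming
# recurrence; negative cells are clamped to zero so alignments can restart.
import numpy as np

def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    H = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i, j] = max(0,
                          H[i - 1, j - 1] + s,   # match / mismatch
                          H[i - 1, j] + gap,     # deletion
                          H[i, j - 1] + gap)     # insertion
            best = max(best, H[i, j])
    return best

# e.g. smith_waterman("ACACACTA", "AGCACACA") returns the optimal local score
```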

Improvement of Model based on Inherent Optical Properties for Remote Sensing of Cyanobacterial Bloom (고유분광특성을 이용한 남조류 원격 추정 모델 개선)

  • Ha, Rim;Nam, Gibeom;Park, Sanghyun;Kang, Taegu;Shin, Hyunjoo;Kim, Kyunghyun;Rhew, Doughee;Lee, Hyuk
    • Korean Journal of Remote Sensing
    • /
    • v.33 no.2
    • /
    • pp.111-123
    • /
    • 2017
  • The phycocyanin pigment (PC) is a marker of cyanobacterial presence in eutrophic inland water. Accurate estimation of low PC concentrations in turbid inland water is challenging due to the optical complexity, yet critical for issuing an early warning of the potential risks of cyanobacterial blooms to the public. To monitor cyanobacterial blooms in eutrophic inland waters, an approach is proposed that partitions the non-water absorption coefficient from measured reflectance and retrieves the absorption coefficient of PC, with the aim of improving the accuracy of remotely estimated PC, in particular at low concentrations. The proposed inversion model retrieves the absorption spectra of PC, $a_{pc}(\lambda)$, with $R^2 \geq 0.8$ for $a_{pc}(620)$. The algorithm achieved more accurate Chl-a and PC estimation, with $0.71 \leq R^2 \leq 0.85$, relative root mean square error (rRMSE) $\leq 39.4\%$, and mean relative error (RE) $\leq 78.0\%$, than the widely used semi-empirical algorithm on the same dataset. In particular, low PC values ($PC \leq 50\ mg/m^3$) and low PC:Chl-a ratio values for all datasets used in this study were well predicted by the proposed algorithm. The error metrics used here are sketched after this entry.
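A short sketch of the reported error metrics under commonly used definitions; the paper's exact formulas may differ slightly, so treat these as assumptions.

```python
# rRMSE: root mean square error normalized by the mean observation;
# RE: mean relative error, the average of |pred - obs| / obs; both in percent.
import numpy as np

def rrmse(observed, predicted):
    observed, predicted = np.asarray(observed, float), np.asarray(predicted, float)
    return 100.0 * np.sqrt(np.mean((predicted - observed) ** 2)) / np.mean(observed)

def mean_relative_error(observed, predicted):
    observed, predicted = np.asarray(observed, float), np.asarray(predicted, float)
    return 100.0 * np.mean(np.abs(predicted - observed) / observed)
```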

Characteristics of Gas Furnace Process by Means of Partition of Input Spaces in Trapezoid-type Function (사다리꼴형 함수의 입력 공간분할에 의한 가스로공정의 특성분석)

  • Lee, Dong-Yoon
    • Journal of Digital Convergence
    • /
    • v.12 no.4
    • /
    • pp.277-283
    • /
    • 2014
  • Fuzzy modeling generally uses the given data, and the fuzzy rules are established by selecting the input variables and dividing the input space for each selected variable. The premise part of a fuzzy rule is determined by the selection of input variables, the number of space divisions, and the membership functions; in this paper, the consequent part of the fuzzy rule is identified by polynomial functions in the form of linear inference and modified quadratic inference. For parameter identification in the premise part, the input space is divided either by the Min-Max method, which uses the minimum and maximum values of the input data set, or by the C-Means clustering algorithm, which forms the input data into hard clusters. The identification of the consequent parameters, namely the polynomial coefficients of each rule, is carried out by the standard least squares method (sketched below). In this paper, the premise part divides the input space using trapezoid-type membership functions, and we evaluate the performance on the gas furnace process, which is widely used for nonlinear process modeling.
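A simplified one-input sketch, assuming evenly spaced trapezoids over the Min-Max range and linear rule consequents fitted by standard least squares; it is not the paper's full multi-input identification, and all names are illustrative.

```python
# Min-Max partition of one input into overlapping trapezoidal membership
# functions, then least squares estimation of linear consequents (TSK-style).
import numpy as np

def trapezoid(x, a, b, c, d):
    """0 outside [a, d], 1 on [b, c], linear in between."""
    return np.clip(np.minimum((x - a) / (b - a + 1e-12),
                              (d - x) / (d - c + 1e-12)), 0.0, 1.0)

def min_max_partition(x, n_rules):
    """Evenly spaced, overlapping trapezoids spanning [min(x), max(x)]."""
    lo, hi = x.min(), x.max()
    width = (hi - lo) / n_rules
    return [(e - 0.25 * width, e + 0.25 * width, e + 0.75 * width, e + 1.25 * width)
            for e in np.linspace(lo, hi, n_rules + 1)[:-1]]

def fit_consequents(x, y, mf_params):
    """Design matrix [w_r, w_r * x] per rule, solved by ordinary least squares."""
    W = np.stack([trapezoid(x, *p) for p in mf_params], axis=1)  # (N, R) firing levels
    W = W / (W.sum(axis=1, keepdims=True) + 1e-12)               # normalized
    Phi = np.hstack([W, W * x[:, None]])                         # (N, 2R)
    coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return coef, Phi   # prediction: Phi @ coef
```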

Geometric Correction of Lips Using Lip Information (입술정보를 이용한 입술모양의 기하학적 보정)

  • 황동국;박희정;전병민
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.6C
    • /
    • pp.834-841
    • /
    • 2004
  • Lips in lip images can be geometrically transformed according to the location or pose of the camera and the speaker. This transformation changes the geometric information of the original lip shape. Therefore, to enhance global lip information by using partial lip information to correct geometrically transformed lip shapes, in this paper we propose a method that can geometrically correct lips. The method is composed of two steps: a feature-deciding step and a correcting step. In the former, key points and features of the source image are extracted according to its lip model, and those of the target image are created according to its lip model. In the latter, the source and target images are each partitioned into four regions based on the information extracted in the previous step, the mapping relation between them is decided, and, after mapping, the corrected sub-images are united into a result image (sketched below). As experiment images, we use frames containing the pronunciation of the short vowels of the Korean language, and we use lip symmetry to evaluate the proposed algorithm. The experimental results show that the correction rate was enhanced more for the lower lip than for the upper lip, and more for lips moving largely than for lips moving slightly.
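An illustrative reconstruction of the correcting step, assuming four lip key points (left/right corners, top, bottom) and OpenCV affine warps; these names and the warping details are assumptions, not the paper's implementation.

```python
# Partition the lip area into four triangular regions around the key points,
# map each region with its own affine transform, and unite the warped pieces.
import cv2
import numpy as np

def correct_lips(image, src_pts, dst_pts):
    """src_pts / dst_pts: dicts with 'left', 'right', 'top', 'bottom' (x, y) entries."""
    h, w = image.shape[:2]
    out = image.copy()
    c_src = (np.array(src_pts['left']) + np.array(src_pts['right'])) / 2.0
    c_dst = (np.array(dst_pts['left']) + np.array(dst_pts['right'])) / 2.0
    for a, b in [('left', 'top'), ('top', 'right'), ('right', 'bottom'), ('bottom', 'left')]:
        src_tri = np.float32([src_pts[a], src_pts[b], c_src])
        dst_tri = np.float32([dst_pts[a], dst_pts[b], c_dst])
        M = cv2.getAffineTransform(src_tri, dst_tri)        # per-region mapping
        warped = cv2.warpAffine(image, M, (w, h))
        mask = np.zeros((h, w), dtype=np.uint8)
        cv2.fillConvexPoly(mask, np.int32(dst_tri), 255)    # target region only
        out[mask > 0] = warped[mask > 0]                    # unite sub-images
    return out
```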

A Study on Genetically Optimized Fuzzy Set-based Polynomial Neural Networks (진화이론을 이용한 최적화 Fuzzy Set-based Polynomial Neural Networks에 관한 연구)

  • Rho, Seok-Beom;Oh, Sung-Kwun
    • Proceedings of the KIEE Conference
    • /
    • 2004.11c
    • /
    • pp.346-348
    • /
    • 2004
  • In this paper, we introduce a new Fuzzy Polynomial Neural Networks (FPNN)-like structure whose neuron is based on the Fuzzy Set-based Fuzzy Inference System (FS-FIS) and thus differs from that of FPNN, which is based on the Fuzzy Relation-based Fuzzy Inference System (FR-FIS), and we discuss the capability of this new FPNN-like structure, named Fuzzy Set-based Polynomial Neural Networks (FSPNN). The premise parts of their fuzzy rules are not identical, while the consequent parts of both networks (FPNN and FSPNN) are identical. This difference results from the viewpoint taken on the partition of the system's input space. In other words, from the FS-FIS point of view the input variables are mutually independent over the input space of the system, while from the FR-FIS viewpoint they are related to each other. Structurally, FPNN-like networks such as FPNN and FSPNN are almost similar, and therefore they share the same shortcomings as well as the same virtues on the structural side. The proposed design procedure for the network architecture involves the selection of appropriate nodes with specific local characteristics, such as the number of input variables, the order of the polynomial (constant, linear, quadratic, or modified quadratic functions) viewed as the consequent part of the fuzzy rules, and a collection of a specific subset of input variables. In the parameter optimization phase, we adopt Information Granulation (IG) based on the HCM clustering algorithm (sketched below) and standard least-squares-based learning. Through the consecutive process of such structural and parametric optimization, an optimized and flexible fuzzy neural network is generated in a dynamic fashion. To evaluate the performance of the genetically optimized FSPNN (gFSPNN), the model is tested on the gas furnace process dataset.
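A minimal HCM (hard c-means) clustering sketch of the kind used for information granulation; the genetic structural optimization and least-squares learning of gFSPNN are not reproduced, and the function name is illustrative.

```python
# Hard c-means: alternate between assigning each sample to its nearest
# prototype and recomputing prototypes as cluster means; the prototypes then
# serve as information granules for building the fuzzy sets.
import numpy as np

def hcm(X, c, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=c, replace=False)].astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = np.argmin(dist, axis=1)
        for k in range(c):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    return centers, labels
```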
