• Title/Summary/Keyword: extensive data analysis

Search Results: 603

A Study on the Factors and Measurement of Quality of System Integration Service (정보시스템 통합 서비스의 품질요인 및 측정에 관한 연구)

  • 서창적
    • Journal of Korean Society for Quality Management / v.27 no.4 / pp.20-41 / 1999
  • This study addresses the development of a quality measurement instrument for systems integration (SI) services. Several dimensions that affect the quality of systems integration services were identified and tested, and a measurement tool (questionnaire) for these factors was developed. To this end, an extensive literature review and in-depth interviews with several SI managers and customers were conducted. We suggest an analysis framework that includes performance variables such as quality, customer satisfaction, intention to renew the contract, and contribution to improving the customer's information system, as well as the quality factors. To verify the research framework, data collected from the survey were analyzed statistically. Data from 73 respondents were used for the analysis. Consequently, we identified eight factors and developed a 41-item, 5-point Likert instrument to measure the quality of SI services. The 41-item instrument suggested in this study proved very useful for measuring the performance of SI services in terms of quality and customer satisfaction. It was also shown that the instrument measured the intention to renew the contract and the contribution to the customer's information system well.
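
The reliability of a multi-item Likert instrument of this kind is commonly summarized with Cronbach's alpha. A minimal sketch of the computation follows; the item count and responses below are hypothetical, not the study's data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for k questionnaire items (columns) over
    respondents (rows): k/(k-1) * (1 - sum of item variances /
    variance of the total score)."""
    k = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([row[j] for row in items]) for j in range(k)]
    totals = [sum(row) for row in items]
    return k / (k - 1) * (1 - sum(item_vars) / var(totals))

# Hypothetical 5-point Likert responses: 6 respondents x 4 items
responses = [
    [4, 4, 5, 4],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 3],
    [1, 2, 1, 2],
    [4, 3, 4, 4],
]
alpha = cronbach_alpha(responses)
```

Values near 1 indicate that the items measure a common construct; a validated instrument like the 41-item scale above would report alpha per factor.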


Scaling Reuse Detection in the Web through Two-way Boosting with Signatures and LSH

  • Kim, Jong Wook
    • Journal of Korea Multimedia Society / v.16 no.6 / pp.735-745 / 2013
  • The emergence of Web 2.0 technologies such as blogs and wikis enables even naive users to easily create and share content on the Web using freely available content-sharing tools. The wide availability of almost-free data and the promiscuous sharing of content through social networking platforms have created a content-borrowing phenomenon, in which the same content appears (in many cases in the form of extensive quotations) in different outlets. An immediate side effect of this phenomenon is that identifying which content is reused by whom is becoming a critical tool in social network analysis, including expert identification and analysis of information flow. Internet-scale reuse detection, however, poses extremely challenging scalability issues: given the large volume of user-created data on the Web, techniques developed for content-reuse detection must be fast and scalable. In this paper, we therefore propose the qSign_lsh algorithm, a mechanism for identifying multi-sentence content reuse among documents by efficiently combining sentence-level evidence. Experimental results show that qSign_lsh significantly improves reuse detection speed while providing high recall.
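
The qSign_lsh algorithm itself is not reproduced here, but the general idea it builds on, hashing sentence-level signatures into LSH buckets so that only colliding documents are compared, can be sketched with a minimal MinHash/LSH example. All names, parameters, and documents below are illustrative:

```python
import hashlib
from collections import defaultdict

def shingles(sentence, k=3):
    """Character k-grams of a sentence (a simple signature basis)."""
    s = sentence.lower()
    return {s[i:i + k] for i in range(len(s) - k + 1)}

def minhash(sh, num_hashes=32):
    """MinHash signature: for each hash function, keep the minimum
    hash value over the shingle set."""
    return [
        min(int(hashlib.md5(f"{seed}:{g}".encode()).hexdigest(), 16) for g in sh)
        for seed in range(num_hashes)
    ]

def lsh_buckets(docs, bands=8, rows=4):
    """Band each signature; documents sharing any band bucket
    become candidate reuse pairs (no all-pairs comparison needed)."""
    buckets = defaultdict(set)
    for doc_id, sentence in docs.items():
        sig = minhash(shingles(sentence), bands * rows)
        for b in range(bands):
            key = (b, tuple(sig[b * rows:(b + 1) * rows]))
            buckets[key].add(doc_id)
    pairs = set()
    for ids in buckets.values():
        ordered = sorted(ids)
        for i in range(len(ordered)):
            for j in range(i + 1, len(ordered)):
                pairs.add((ordered[i], ordered[j]))
    return pairs

docs = {
    "a": "the quick brown fox jumps over the lazy dog",
    "b": "the quick brown fox jumps over the lazy dog!",  # near-duplicate
    "c": "completely unrelated sentence about concrete",
}
cand = lsh_buckets(docs)
```

Near-duplicate sentences collide in at least one band with high probability, while unrelated sentences almost never do, which is what makes the approach scale to Web-sized corpora.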

Statistical variations in the impact resistance and mechanical properties of polypropylene fiber reinforced self-compacting concrete

  • Mastali, M.;Dalvand, A.;Fakharifar, M.
    • Computers and Concrete / v.18 no.1 / pp.113-137 / 2016
  • Extensive experimental studies on the mechanical properties of Polypropylene Fibre Reinforced Self-compacting Concrete (PFRSCC) were carried out, covering different fibre volume fractions (0.25%, 0.5%, 0.75%, and 1%) and different water-to-cement ratios (0.21, 0.34, 0.38, and 0.41). The experimental program used two hundred and sixteen specimens to obtain the impact resistance and mechanical properties of PFRSCC materials, considering compressive strength, splitting tensile strength, and flexural strength. Statistical and analytical studies focused on the experimental data to correlate the mechanical properties of PFRSCC materials. Statistical results revealed that compressive, splitting tensile, and flexural strengths, as well as impact resistance, follow a normal distribution. Moreover, to correlate the mechanical properties based on the acquired test results, linear and nonlinear equations relating the mechanical properties and impact resistance of PFRSCC materials were developed.
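
Correlation equations of the linear kind described above are fitted by ordinary least squares. A minimal sketch follows, using hypothetical compressive/splitting-tensile strength pairs rather than the paper's measurements:

```python
def linfit(xs, ys):
    """Ordinary least-squares slope and intercept for y ~ a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Hypothetical pairs: compressive strength (MPa) vs splitting tensile strength (MPa)
fc = [30.0, 40.0, 50.0, 60.0, 70.0]
fst = [2.9, 3.5, 4.0, 4.4, 4.9]
a, b = linfit(fc, fst)
```

Nonlinear forms (e.g., power laws) are typically fitted the same way after a log transform of both variables.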

Three Sterol Sulfates Isolated from a Marine Sponge Acanthodoryx Fibrosa

  • Park, Su-Young;Hwang, Byung-Su;Ji, Kwang-Hee;Rho, Jung-Rae
    • Journal of the Korean Magnetic Resonance Society / v.11 no.2 / pp.122-128 / 2007
  • Three sterol sulfates were isolated from an AMPK activity-guided fraction of the marine sponge Acanthodoryx fibrosa. Their structures were determined by extensive NMR analysis and MS data; two of the compounds were confirmed to be unusual phosphorylated sterol sulfates by comparison with the NMR data of known compounds. Compound 3 was found to be a new dephosphated sterol sulfate derivative. In Western blot analysis, compound 1 showed a moderate AMPK-activating effect on L6 myoblast cells.


Design and Performance Analysis of the H/V-bus Parallel Computer (H/V-버스 병렬컴퓨터의 설계 및 성능 분석)

  • 김종현
    • Journal of the Korea Society for Simulation / v.3 no.1 / pp.29-42 / 1994
  • The architecture of a MIMD-type parallel computer system is specified; a simulator is developed to support the design and evaluation of systems based on the architecture; and simulation experiments are conducted to evaluate system performance. The horizontal/vertical-bus (H/V-bus) system architecture provides an NxN array of processing elements which communicate with each other through a network of N horizontal buses and N vertical buses. The simulator, written in SLAM II and FORTRAN, is designed to provide high resolution in simulating the IPC mechanism. Parameters give the user independent control of system size, PE speed, and IPC mechanism speed. Results generated by the simulator include execution times, PE utilizations, queue lengths, and other data. The simulator is used to study system performance when a partial differential equation is solved by the parallel Gauss-Seidel method. For comparison, the benchmark is also executed on a single-bus system simulator derived from the H/V-bus system simulator. The benchmark is also solved on a single PE to obtain data for computing speedups. An extensive analysis of the results is presented.
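
The benchmark solves a partial differential equation by a parallel Gauss-Seidel method; the serial kernel that such a scheme parallelizes across the PE array looks roughly like the following sketch for the 2-D Laplace equation (grid size, sweep count, and boundary values are illustrative):

```python
def gauss_seidel_laplace(grid, iters=200):
    """In-place Gauss-Seidel sweeps for the 2-D Laplace equation:
    each interior point becomes the average of its four neighbours,
    reusing values already updated within the same sweep (which is
    what distinguishes Gauss-Seidel from Jacobi iteration)."""
    n = len(grid)
    for _ in range(iters):
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                grid[i][j] = 0.25 * (grid[i - 1][j] + grid[i + 1][j] +
                                     grid[i][j - 1] + grid[i][j + 1])
    return grid

# Square plate: top edge held at 1.0, the other edges at 0.0
n = 10
plate = [[0.0] * n for _ in range(n)]
plate[0] = [1.0] * n
solution = gauss_seidel_laplace(plate)
```

In a parallel version, the grid is partitioned over the PE array and boundary rows/columns are exchanged over the buses between sweeps, which is why IPC speed dominates the speedup measured by the simulator.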


GraPT: Genomic InteRpreter about Predictive Toxicology

  • Woo Jung-Hoon;Park Yu-Rang;Jung Yong;Kim Ji-Hun;Kim Ju-Han
    • Genomics & Informatics / v.4 no.3 / pp.129-132 / 2006
  • Toxicogenomics has recently emerged in the field of toxicology, and the DNA microarray technique has become a common strategy for predictive toxicology, which studies the molecular mechanisms triggered by exposure to chemical or environmental stress. Although microarray experiments offer extensive genomic information to researchers, the high-dimensional nature of the data often makes it hard to extract meaningful results. We therefore developed a toxicant enrichment analysis similar to the common enrichment approach. We also developed a web-based system, graPT, to enable prediction of the toxic endpoints of experimental chemicals.
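
Enrichment analyses of this kind are typically scored with a one-sided hypergeometric test. A minimal sketch follows, with hypothetical counts; this is an illustration of the common enrichment approach the abstract refers to, not graPT's actual implementation:

```python
from math import comb

def hypergeom_enrichment_p(N, K, n, k):
    """One-sided hypergeometric p-value, the standard enrichment test:
    the probability of drawing >= k annotated genes when n genes are
    picked from a universe of N genes, K of which carry the annotation."""
    return sum(
        comb(K, i) * comb(N - K, n - i)
        for i in range(k, min(K, n) + 1)
    ) / comb(N, n)

# Hypothetical: 40 of 1000 genes annotated to a toxicant, and 8 of the
# 50 differentially expressed genes fall in that annotated set
p = hypergeom_enrichment_p(N=1000, K=40, n=50, k=8)
```

A small p-value indicates the differentially expressed genes are over-represented in the toxicant's gene set, i.e. the toxicant is "enriched" for the experiment.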

A Two Sample Test for Functional Data

  • Lee, Jong Soo;Cox, Dennis D.;Follen, Michele
    • Communications for Statistical Applications and Methods / v.22 no.2 / pp.121-135 / 2015
  • We consider testing the equality of mean functions from two samples of functional data. A novel test based on the adaptive Neyman methodology applied to Hotelling's T-squared statistic is proposed. Under the enlarged null hypothesis that the distributions of the two populations are the same, randomization methods are proposed to find a null distribution which gives accurate significance levels. An extensive simulation study shows that the proposed test performs very well in comparison with several other methods under a variety of alternatives and is among the best methods across all alternatives, whereas each of the other methods shows weak power at some alternatives. An application to a real-world data set demonstrates the applicability of the method.
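
The randomization idea in the abstract, reshuffling group labels under the null that both samples share one distribution, can be illustrated with a generic permutation test on the difference of means. This uses a scalar statistic as a stand-in for the paper's adaptive-Neyman/Hotelling statistic, and the data below are made up:

```python
import random

def perm_test_mean_diff(x, y, n_perm=2000, seed=0):
    """Permutation p-value for the difference of sample means.
    Under the null, the group labels are exchangeable, so the null
    distribution is built by reshuffling the pooled observations."""
    rng = random.Random(seed)
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = list(x) + list(y)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        px, py = pooled[:len(x)], pooled[len(x):]
        if abs(sum(px) / len(px) - sum(py) / len(py)) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)  # add-one correction avoids p = 0

# Two clearly shifted samples should yield a small p-value
x = [0.10, 0.20, 0.15, 0.05, 0.12, 0.18, 0.09, 0.20]
y = [1.10, 1.20, 1.15, 1.05, 1.12, 1.18, 1.09, 1.20]
p = perm_test_mean_diff(x, y)
```

For functional data the same recipe applies, with the scalar mean difference replaced by a statistic computed on the (basis-projected) curves.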

Modeling and Verification of Eco-Driving Evaluation

  • Lin Liu;Nenglong Hu;Zhihu Peng;Shuxian Zhan;Jingting Gao;Hong Wang
    • Journal of Information Processing Systems / v.20 no.3 / pp.296-306 / 2024
  • Traditional ecological driving (Eco-Driving) evaluations often rely on mathematical models that predominantly offer subjective insights, which limits their application in real-world scenarios. This study develops a robust, data-driven Eco-Driving evaluation model by integrating dynamic and distributed multi-source data, including vehicle performance, road conditions, and the driving environment. The model employs a combination weighting method alongside K-means clustering to facilitate a nuanced comparative analysis of Eco-Driving behaviors across vehicles with identical energy consumption profiles. Extensive data validation confirms that the proposed model is capable of assessing Eco-Driving practices across diverse vehicles, roads, and environmental conditions, thereby ensuring more objective, comprehensive, and equitable results.
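
The clustering step can be sketched with a plain Lloyd's-algorithm k-means on one-dimensional composite scores. The scores, seed, and cluster count below are hypothetical and do not reflect the study's combination weighting scheme:

```python
import random

def kmeans_1d(values, k=2, iters=50, seed=1):
    """Lloyd's algorithm on scalar scores: assign each point to the
    nearest centroid, then recompute centroids as cluster means."""
    rng = random.Random(seed)
    centroids = rng.sample(values, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[idx].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Hypothetical composite eco-driving scores for two distinct driving styles
scores = [0.21, 0.25, 0.22, 0.27, 0.81, 0.84, 0.79, 0.85]
centers = kmeans_1d(scores)
```

In the evaluation model, each vehicle's multi-source features would first be combined into such a score via the weighting method, and the resulting clusters separate Eco-Driving behavior groups.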

Meta-analysis on risk stratification of malignant ventricular tachyarrhythmic events in arrhythmogenic right ventricular cardiomyopathy

  • Roh, Young-Eun;Jang, Hyun Ji;Cho, Min-Jung
    • Journal of Yeungnam Medical Science / v.34 no.2 / pp.208-215 / 2017
  • Background: Arrhythmogenic right ventricular cardiomyopathy (ARVC) is a cardiomyopathy characterized by predominant right ventricular fibro-fatty replacement, right ventricular dysfunction, and ventricular arrhythmias. It is a rare but important cause of sudden cardiac death in children and young adults. A meta-analysis on risk stratification of major ventricular tachyarrhythmic events indicating the need for implantable cardioverter defibrillator therapy in ARVC was performed. Methods: The PubMed database was searched from its inception to May 2015. Of the 433 citations identified, 12 were included in this meta-analysis. Data regarding major ventricular tachyarrhythmic events were retrieved for 817 subjects from these studies. For each variable, a combined odds ratio (OR) was calculated using a fixed-effects meta-analysis. Results: Extensive right ventricular dysfunction (OR, 2.44), ventricular late potential (OR, 1.66), inducible ventricular tachyarrhythmia during electrophysiology study (OR, 3.67), non-sustained ventricular tachycardia (OR, 3.78), and a history of fatal events/sustained ventricular tachycardia (OR, 5.66) were identified as significant risk factors (p<0.0001). Conclusion: This meta-analysis shows that extensive right ventricular dysfunction, ventricular late potential, inducible ventricular tachyarrhythmia during electrophysiological study, non-sustained ventricular tachycardia, and a history of sustained ventricular tachycardia/fibrillation are consistently reported risk factors for major ventricular tachyarrhythmic events indicating implantable cardioverter defibrillator therapy in patients with ARVC.
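
Fixed-effects pooling of odds ratios is conventionally done by inverse-variance weighting on the log-odds scale. A minimal sketch follows; the 2x2 tables are hypothetical, not data from the studies in this meta-analysis:

```python
import math

def fixed_effect_pooled_or(studies):
    """Inverse-variance fixed-effects pooling of odds ratios.
    Each study is a 2x2 table (a, b, c, d): events/non-events in the
    exposed and unexposed groups. Pool on the log-OR scale, weighting
    each study by the inverse of its variance, then exponentiate."""
    num = den = 0.0
    for a, b, c, d in studies:
        log_or = math.log((a * d) / (b * c))
        var = 1 / a + 1 / b + 1 / c + 1 / d  # Woolf's variance of log OR
        w = 1.0 / var
        num += w * log_or
        den += w
    return math.exp(num / den)

# Hypothetical 2x2 tables (risk factor present vs absent, event vs no event)
studies = [(15, 5, 10, 20), (20, 10, 12, 25), (8, 4, 6, 10)]
pooled = fixed_effect_pooled_or(studies)
```

The pooled estimate always lies between the smallest and largest study-level ORs, with larger, more precise studies pulling it hardest.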

Multiple Group Testing Procedures for Analysis of High-Dimensional Genomic Data

  • Ko, Hyoseok;Kim, Kipoong;Sun, Hokeun
    • Genomics & Informatics / v.14 no.4 / pp.187-195 / 2016
  • In genetic association studies with high-dimensional genomic data, multiple group testing procedures are often required to identify disease- or trait-related genes or genetic regions, where multiple genetic sites or variants are located within the same gene or genetic region. However, statistical testing procedures based on individual tests suffer from multiple testing issues, such as control of the family-wise error rate and dependence among tests. Moreover, the main interest in genetic association studies is detecting the few genes associated with a phenotype outcome among tens of thousands of genes. For this reason, regularization procedures, in which a phenotype outcome is regressed on all genomic markers and the regression coefficients are estimated from a penalized likelihood, have been considered a good alternative for the analysis of high-dimensional genomic data. However, the selection performance of regularization procedures has rarely been compared with that of statistical group testing procedures. In this article, we performed extensive simulation studies in which commonly used group testing procedures such as principal component analysis, Hotelling's T-squared test, and the permutation test are compared with the group lasso (least absolute shrinkage and selection operator) in terms of true positive selection. We also applied all methods considered in the simulation studies to identify genes associated with ovarian cancer from over 20,000 genetic sites generated from the Illumina Infinium HumanMethylation27K BeadChip. We found a large discrepancy between the genes selected by the multiple group testing procedures and those selected by the group lasso.
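
The all-in/all-out selection behavior that distinguishes the group lasso from per-site tests comes from its groupwise soft-thresholding proximal step: a whole coefficient group is shrunk by its L2 norm, so it is either kept or zeroed in its entirety. A minimal sketch of that step (coefficients and penalty below are illustrative):

```python
import math

def group_soft_threshold(beta_group, lam):
    """Groupwise soft-thresholding, the proximal operator behind the
    group lasso penalty: if the group's L2 norm is below the penalty,
    zero the whole group; otherwise shrink every coefficient by the
    same factor."""
    norm = math.sqrt(sum(b * b for b in beta_group))
    if norm <= lam:
        return [0.0] * len(beta_group)
    scale = 1.0 - lam / norm
    return [scale * b for b in beta_group]

weak = group_soft_threshold([0.1, -0.05, 0.08], lam=0.5)   # whole group zeroed
strong = group_soft_threshold([2.0, -1.5, 1.0], lam=0.5)   # kept, shrunk
```

Applied to genomic data, each "group" would be the set of sites within one gene or region, which is why the method selects or discards genes as units rather than individual variants.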