• Title/Summary/Keyword: Data analysis using software


Study of Analysis Software for Event Recorder in High Speed Railway (고속전철용 Event Recorder를 위한 분석도구 소프트웨어 연구)

  • Song, Gyu-Youn;Lee, Sang-Nam;Ryu, Hee-Moon;Kim, Kwang-Yul;Han, Kwang-Rok
    • Proceedings of the KSR Conference
    • /
    • 2009.05b
    • /
    • pp.341-347
    • /
    • 2009
  • In a high speed railway, the event recorder system stores train speed and related operational data in real time. Using this information, we can analyze train operation and the cause of a train accident. The analysis software retrieves the stored data from the event recorder and displays the status of the various signals related to train operation, so that operation before and after a given time can be examined. In this paper we propose analysis software for displaying and analyzing the operation of a high speed train. A method for transferring the stored data from the event recorder to the analysis software is proposed, and an efficient procedure for storing the transferred data in the analysis system is developed. An effective method for displaying and analyzing the stored data is also studied, with the aim of identifying the cause of a train accident.
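
The "operation before and after a given time" query the abstract describes can be sketched as a time-window lookup over records sorted by timestamp. This is an illustrative sketch only; the record fields and function names are assumptions, not the paper's actual data format.

```python
from dataclasses import dataclass, field
from bisect import bisect_left, bisect_right

@dataclass
class EventRecord:
    timestamp: float            # seconds since recording start
    speed_kmh: float            # train speed at this instant
    signals: dict = field(default_factory=dict)  # operation-related signal states

def window_around(records, t, before=5.0, after=5.0):
    """Return the records within [t - before, t + after].

    Assumes `records` is sorted by timestamp, as data read
    sequentially out of an event recorder would be.
    """
    times = [r.timestamp for r in records]
    lo = bisect_left(times, t - before)
    hi = bisect_right(times, t + after)
    return records[lo:hi]
```

With records at one-second intervals, `window_around(records, t, 1, 1)` returns the three records bracketing time `t`.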


A Framework for Detecting Data Races in Weapon Software (무기체계 소프트웨어의 자료경합을 탐지하기 위한 프레임워크)

  • Oh, Jin-Woo;Choi, Eu-Teum;Jun, Yong-Kee
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.13 no.6
    • /
    • pp.305-312
    • /
    • 2018
  • Software has been used to implement many functions of modern weapon systems, which have high mission criticality. Weapon system software must adopt multi-threaded processing to satisfy growing performance requirements. However, developing multi-threaded programs is difficult because of concurrency faults such as unintended data races. It is especially important to provide analyses for debugging data races, because faults in weapon system software may cause personal injury. In this paper, we present an efficient analysis framework, called ConDeWS, which determines the scope of dynamic analysis using the results of static analysis and fault analysis. Applying the implemented framework to the target software, we detected unintended data races that had not been detected by static analysis.
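
The abstract does not describe ConDeWS's internals, but dynamic race detectors of this kind commonly rest on a happens-before check over vector clocks: two accesses to the same variable race if neither ordered-before the other and at least one is a write. A minimal sketch of that core predicate (function names are illustrative, not from the paper):

```python
def happens_before(vc_a, vc_b):
    """True if vector clock vc_a precedes vc_b componentwise (and differs)."""
    return all(a <= b for a, b in zip(vc_a, vc_b)) and vc_a != vc_b

def is_race(access_a, access_b):
    """Each access is (vector_clock, is_write), both on the same shared variable.

    A race requires the accesses to be concurrent (unordered by
    happens-before) and at least one of them to be a write.
    """
    vc_a, write_a = access_a
    vc_b, write_b = access_b
    concurrent = (not happens_before(vc_a, vc_b)
                  and not happens_before(vc_b, vc_a))
    return concurrent and (write_a or write_b)
```

Two concurrent reads are never reported; a write concurrent with any other access is.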

Priority Analysis for Software Functions Using Social Network Analysis and DEA(Data Envelopment Analysis) (사회연결망 분석과 자료포락분석 기법을 이용한 소프트웨어 함수 우선순위 분석 연구)

  • Huh, Sang Moo;Kim, Woo Je
    • Journal of Information Technology Services
    • /
    • v.17 no.3
    • /
    • pp.171-189
    • /
    • 2018
  • To remove software defects and improve software performance, many developers perform code inspections and use static analysis tools. A code inspection is a manual activity for detecting defects in the developed source code; however, there is no clear criterion for deciding which source code should be inspected. A static analysis tool can detect defects automatically by analyzing source code without running it, but it has the disadvantage of analyzing only the code within each function, without analyzing the relations among functions. The functions in source code are interconnected and form a social network, and functions that occupy critical locations in this network can be important enough to affect overall quality, whereas a static analysis tool merely reports how many times each function is called. In this study, core functions are elicited by applying social network analysis and DEA (Data Envelopment Analysis) to the CUBRID open-source database. In addition, we suggest clear criteria for selecting the target source code for code inspection, and ways to find the core functions, in order to minimize defects and improve performance.
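
The idea of treating the call graph as a social network can be illustrated with the simplest centrality measure. The sketch below computes normalized degree centrality over caller/callee edges; it is a generic illustration, not the paper's actual metric (the study also uses DEA, which is not shown here).

```python
from collections import defaultdict

def degree_centrality(call_edges):
    """Normalized degree centrality over a directed function call graph.

    call_edges: iterable of (caller, callee) pairs. A function's score
    is (in-degree + out-degree) / (n - 1), so functions that many
    others call, or that call many others, score highest.
    """
    in_deg = defaultdict(int)
    out_deg = defaultdict(int)
    nodes = set()
    for caller, callee in call_edges:
        out_deg[caller] += 1
        in_deg[callee] += 1
        nodes.update((caller, callee))
    n = len(nodes)
    return {f: (in_deg[f] + out_deg[f]) / (n - 1) for f in nodes}
```

A function like `c` below, called from three places, surfaces as the most central candidate for inspection.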

Benchmark Dose Modeling of In Vitro Genotoxicity Data: a Reanalysis

  • Guo, Xiaoqing;Mei, Nan
    • Toxicological Research
    • /
    • v.34 no.4
    • /
    • pp.303-310
    • /
    • 2018
  • The methods of applied genetic toxicology are changing from qualitative hazard identification to quantitative risk assessment. Recently, quantitative analysis with point of departure (PoD) metrics and benchmark dose (BMD) modeling have been applied to in vitro genotoxicity data. Two software packages are commonly used for BMD analysis. In previous studies, we performed quantitative dose-response analysis by using the PROAST software to quantitatively evaluate the mutagenicity of four piperidine nitroxides with various substituent groups on the 4-position of the piperidine ring and six cigarette whole smoke solutions (WSSs) prepared by bubbling machine-generated whole smoke. In the present study, we reanalyzed the obtained genotoxicity data by using the EPA's BMD software (BMDS) to evaluate the inter-platform quantitative agreement of the estimates of genotoxic potency. We calculated the BMDs for 10%, 50%, 100% (i.e., a two-fold increase), and 200% increases over the concurrent vehicle controls to achieve better discrimination of the dose-responses, along with their BMDLs (the lower 95% confidence limit of the BMD) and BMDUs (the upper 95% confidence limit of the BMD). The BMD values and rankings estimated in this study by using the EPA's BMDS were reasonably similar to those calculated in our previous studies by using PROAST. These results indicate that both software packages are suitable for dose-response analysis using the mouse lymphoma assay and that the BMD modeling results from these software packages produce comparable rank orders of mutagenic potency.
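
The relationship between a benchmark response level and its BMD can be made concrete with a toy closed form. Assuming a fitted exponential dose-response f(d) = c·exp(b·d), the BMD for a given fractional increase over control solves f(BMD) = c·(1 + increase). This is an illustrative sketch only; BMDS and PROAST fit several model families and estimate BMDL/BMDU by profile likelihood, none of which is shown here.

```python
import math

def bmd_exponential(b, increase):
    """BMD for a fractional increase over the control response,
    assuming the fitted dose-response is f(d) = c * exp(b * d), b > 0.

    Solving c * exp(b * d) = c * (1 + increase) for d gives
    d = ln(1 + increase) / b. An increase of 1.0 (i.e., 100%)
    corresponds to a two-fold response.
    """
    return math.log(1.0 + increase) / b
```

Larger benchmark responses yield larger BMDs, which is why computing BMD10 through BMD200 spreads out the dose axis for ranking potency.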

An Evolution of Software Reliability in a Large Scale Switching System

  • Lee, Jae-Ki;Nam, Sang-Sik;Kim, Chang-Bong
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.4A
    • /
    • pp.399-414
    • /
    • 2004
  • In this paper, the evolution of software reliability engineering in a large-scale software project is summarized. The considered software, the software of a switching system, consists of many components, called functional blocks. These functional blocks serve as the unit of coding and test, and the software is continuously updated by adding new functional blocks. We are mainly concerned with analyzing the effects of these software components on software reliability, and with the evolution of that reliability. We analyze the static characteristics of the software related to reliability using failure data collected during system test. We also discuss a pattern that represents the local and global growth of software reliability as versions evolve. To find this pattern for the system software, we apply the S-shaped model to the failure data sets of each evolutionary version and the Goel-Okumoto (G-O) model to the grouped overall failure data set. We expect this pattern analysis to be helpful in planning and managing the human resources needed for a new, similar software project developed under the same circumstances, by estimating the total software failures with respect to its size and time.
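
The two models the abstract names have standard mean value functions: Goel-Okumoto gives m(t) = a(1 − e^(−bt)), while the delayed S-shaped model gives m(t) = a(1 − (1 + bt)e^(−bt)), where a is the expected total number of failures and b a detection rate. A minimal sketch of both (parameter fitting by maximum likelihood is not shown):

```python
import math

def mean_failures_go(t, a, b):
    """Goel-Okumoto NHPP mean value function: m(t) = a * (1 - e^(-b t))."""
    return a * (1.0 - math.exp(-b * t))

def mean_failures_s_shaped(t, a, b):
    """Delayed S-shaped mean value function: m(t) = a * (1 - (1 + b t) e^(-b t)).

    Early in testing this curve rises more slowly than Goel-Okumoto,
    reflecting a learning period before failures are detected quickly.
    """
    return a * (1.0 - (1.0 + b * t) * math.exp(-b * t))
```

Both curves saturate at `a`; the S-shaped curve lags below the G-O curve at small `t`, which is what makes it suitable for per-version data with a ramp-up phase.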

A case study of MS Excel's powerful functions for statistical data analysis (Focused on an Analysis of Variance menu) (자료 통계 분석을 위한 MS 엑셀의 유용한 기능들에 관한 사례연구 (지하철 이용객 자료 분석))

  • Kim, Sook-Young
    • Journal of the Korea Computer Industry Society
    • /
    • v.9 no.5
    • /
    • pp.223-228
    • /
    • 2008
  • A case study demonstrating MS Excel's convenient and powerful functions was conducted to test hypotheses on subway data. Quantitative variables were described using the descriptive statistics menu, and qualitative variables were described using the histogram menu of MS Excel. Relationships were tested using the regression menu, differences were tested using the t-test menu, and factors were tested using the analysis of variance (ANOVA) menu of Excel. Data input, management, and statistical analysis were all carried out successfully with MS Excel alone.
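
The ANOVA menu the abstract mentions (Excel's "Anova: Single Factor" tool) reduces to a short computation. A minimal sketch of the one-way ANOVA F statistic it reports (illustrative only; Excel additionally reports p-values and critical F values, omitted here):

```python
def anova_f(groups):
    """One-way ANOVA F statistic for a list of groups of observations.

    F = (between-group mean square) / (within-group mean square),
    the quantity Excel's "Anova: Single Factor" tool reports.
    """
    all_vals = [x for g in groups for x in g]
    n, k = len(all_vals), len(groups)
    grand_mean = sum(all_vals) / n
    # Sum of squares between groups and within groups.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within
```

Groups with identical means give F = 0; widely separated group means drive F up, signaling a real factor effect.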


The Comparative Study of NHPP Software Reliability Model Exponential and Log Shaped Type Hazard Function from the Perspective of Learning Effects (지수형과 로그형 위험함수 학습효과에 근거한 NHPP 소프트웨어 신뢰성장모형에 관한 비교연구)

  • Kim, Hee Cheul
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.8 no.2
    • /
    • pp.1-10
    • /
    • 2012
  • In this study, NHPP software reliability models are examined from the perspective of the learning effects that arise as test managers and test tools gain experience while testing software products developed in the course of a project. Finite-failure nonhomogeneous Poisson process models are presented, with exponential and log-shaped hazard functions applied as the life distribution. The factors influencing error detection are divided into an autonomous error-detection factor and a learning factor gained through prior experience, and the problem of the test manager setting these factors precisely is examined by comparison. As a result, models in which the learning factor is greater than the autonomous error-detection factor are confirmed to be generally efficient. In this paper, failure data recorded as times between failures are analyzed, parameters are estimated using the maximum likelihood method, and after a trend analysis of the data, efficient models are selected using the mean squared error and the coefficient of determination.
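
The model-selection step at the end of the abstract compares fitted models by mean squared error and coefficient of determination. A minimal sketch of both criteria applied to observed versus predicted cumulative failure counts (function names are illustrative, not from the paper):

```python
def mse(observed, predicted):
    """Mean squared error between observed and model-predicted values."""
    n = len(observed)
    return sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n

def r_squared(observed, predicted):
    """Coefficient of determination: 1 - SS_residual / SS_total."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot
```

The model with the lower MSE and the R² closer to 1 is preferred; a perfect fit gives MSE 0 and R² 1.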

Qualitative Data Analysis using Computers (컴퓨터를 이용한 질적 자료 분석)

  • Yi Myung-Sun
    • Journal of Korean Academy of Fundamentals of Nursing
    • /
    • v.6 no.3
    • /
    • pp.570-582
    • /
    • 1999
  • Although computers cannot analyze textual data in the same way that they analyze numerical data, they can nevertheless be of great assistance to qualitative researchers, and the use of computers for analyzing qualitative data has increased since the 1980s. The purpose of this article was to explore the advantages and disadvantages of using computers to analyze textual data, and to suggest strategies for preventing the associated problems. In addition, it illustrated the characteristics and functions of software designed to analyze qualitative data, to help researchers choose a program wisely, and it demonstrated the specific functions and procedures of the NUDIST program, which was designed to develop a conceptual framework or grounded theory from unstructured data. The major advantage of using computers in qualitative research is the management of huge amounts of unstructured data. By managing this data, the researcher can keep track of emerging ideas, arguments, and theoretical concepts, and can organize these tasks more efficiently than with the traditional 'cut-and-paste' technique. Additional advantages are the ability to increase the trustworthiness of the research, the transparency of the research process, and the intuitive creativity of the researcher, and to facilitate team and secondary research. On the other hand, the main disadvantage identified was the worry that the machine could supplant human understanding. To overcome this problem, the article suggested strategies such as 1) a deep understanding of the philosophical and theoretical background of the qualitative research method, 2) a deep understanding of the data as a whole before using the software, 3) use of the software only after becoming familiar with it, 4) continuous evaluation of the software and feedback from it, and 5) continuous awareness of the limitations of the machine, that is, the computer, in interpretive analysis.
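
The core mechanism behind packages like NUDIST is "code and retrieve": the researcher tags segments of text with conceptual codes and later pulls back every segment carrying a given code, replacing manual cut-and-paste. A toy sketch of that index (class and method names are illustrative, not NUDIST's actual interface):

```python
from collections import defaultdict

class CodeIndex:
    """A minimal code-and-retrieve index for qualitative text segments."""

    def __init__(self):
        self._by_code = defaultdict(list)

    def tag(self, segment, *codes):
        """Attach one or more conceptual codes to a text segment."""
        for code in codes:
            self._by_code[code].append(segment)

    def retrieve(self, code):
        """Return every segment tagged with the given code, in tagging order."""
        return list(self._by_code[code])
```

Because one segment can carry several codes, retrieval by any of them finds it, which is what lets emerging categories be reorganized without re-cutting the data.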


An Evolution of Reliability of Large Scale Software of a Switching System (대형 교환 시스템의 소프트웨어 신뢰도 성장)

  • Lee, J.K.;Shin, S.K.;Nam, S.S.;Park, K.C.
    • Electronics and Telecommunications Trends
    • /
    • v.14 no.4 s.58
    • /
    • pp.1-9
    • /
    • 1999
  • In this paper, we summarize the lessons learned from applying software reliability engineering to a large-scale software project: the software system of the TDX-10 ISDN switching system. The considered software consists of many components, called functional blocks, which serve as the unit of coding and test, and the software continues to be developed by adding new functional blocks. We are mainly concerned with analyzing the effects of these software components on software reliability and with analyzing the evolution of that reliability. We analyze the static characteristics of the software related to reliability using failure data collected during system test. We also discuss a pattern that represents the local and global growth of software reliability as versions evolve. To find this pattern for the TDX-10 ISDN system software, we apply the S-shaped model to the failure data sets of each evolutionary version and the Goel-Okumoto (G-O) model to the grouped overall failure data set. We expect this pattern analysis to be helpful in planning and managing the human resources needed for a new, similar software project developed under the same circumstances, by estimating the total software failures with respect to its size and time.

Information Modeling for Finite Element Analysis Using STEP (STEP을 이용한 유한요소해석 정보모델 구축)

  • Choi, Young;Cho, Seong-Wook;Kwon, Ki-Eak
    • Korean Journal of Computational Design and Engineering
    • /
    • v.3 no.1
    • /
    • pp.48-56
    • /
    • 1998
  • Finite element analysis is very important in mechanical engineering design and analysis. The FEA process encompasses shape modeling, mesh generation, matrix solving, and post-processing. Some of these processes can be tightly integrated under current software architectures and data-sharing modes. However, complete integration of the FEA process itself, and its integration with manufacturing processes, is almost impossible in current practice. The barriers are inconsistent data formats and the lack of enterprise-wide software integration technology. In this research, an information model based on STEP AP209 was chosen for handling finite element analysis data. This international standard for FEA data can bridge the gap between the design, analysis, and manufacturing processes. The STEP-based FEA system can be further integrated with distributed software and database environments using CORBA technology. A prototype FEA system, DICESS, was implemented to verify the proposed concepts.
