• Title/Summary/Keyword: data set

Study on Patient Outcomes through the Construction of Korean Nursing Minimum Data Set (NMDS) (한국형 Nursing Minimum Data Set(NMDS)구축을 통한 환자결과에 대한 연구)

  • Lee, Eun-Joo
    • Journal of Korean Academy of Nursing Administration
    • /
    • v.12 no.1
    • /
    • pp.14-22
    • /
    • 2006
  • Purpose: The purpose of this study was to develop a nursing information system containing the core elements of nursing practice, the Nursing Minimum Data Set (NMDS), which should be collected and documented in every setting where nursing care is provided. Method: The program was developed within the hospital information system using the TCP/IP protocol, and NANDA, the Nursing Interventions Classification (NIC), and the Nursing Outcomes Classification (NOC) were used to fill out the elements of the NMDS. Oracle was used as the DBMS under the Windows 98 environment, and PowerBuilder 5.0 was used as the programming language. Results: This study developed linkages among NANDA, NOC, and NIC to facilitate choosing the correct nursing diagnoses, interventions, and outcomes and to stimulate nurses' critical thinking. The system also includes nursing-care-sensitive patient outcomes, so nurses can be actively involved in nursing effectiveness research by analyzing the data stored in the database or by linking it with other health-care-related databases. Conclusion: By providing a tool to describe and organize nursing practice and to measure nursing care effectiveness, the program can ultimately be used for nursing research, policy development, reimbursement of nursing care, and calculating staffing and nursing skill mix. A toy sketch of the NANDA-NOC-NIC linkage idea follows below.
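
To make the linkage idea concrete, here is a minimal Python sketch of a NANDA-NOC-NIC mapping, assuming a simple in-memory table; the diagnosis, outcome, and intervention labels are illustrative placeholders, not the study's actual Oracle schema.

```python
# Illustrative NANDA-NOC-NIC linkage (toy, in-memory); the study's actual
# Oracle schema is not described in the abstract, so all names are assumed.
NNN_LINKAGE = {
    "Acute Pain": {  # NANDA diagnosis (illustrative)
        "outcomes": ["Pain Level", "Pain Control"],                        # NOC
        "interventions": ["Pain Management", "Analgesic Administration"],  # NIC
    },
}

def options_for(diagnosis):
    """Return the NOC outcomes and NIC interventions linked to a diagnosis."""
    entry = NNN_LINKAGE.get(diagnosis, {"outcomes": [], "interventions": []})
    return entry["outcomes"], entry["interventions"]

outcomes, interventions = options_for("Acute Pain")
print(outcomes, interventions)
```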

Pathway Retrieval for Transcriptome Analysis using Fuzzy Filtering Technique and Web Service

  • Lee, Kyung-Mi;Lee, Keon-Myung
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.12 no.2
    • /
    • pp.167-172
    • /
    • 2012
  • In biology, the advent of high-throughput technologies for sequencing, probing, and screening has produced volumes of data too large to handle manually, so biologists have turned to software tools. This paper introduces a bioinformatics tool that helps biologists find potentially interesting pathway maps in a transcriptome data set in which gene expression levels are described for both case and control samples. The tool accepts a transcriptome data set, then selects genes and categorizes them into four classes using a fuzzy filtering technique in which the classes are defined by membership functions. It collects and edits the pathway maps related to the selected genes without the analyst's intervention, invoking a sequence of web service functions from KEGG, an online pathway database system, to retrieve related information, locate pathway maps, and manipulate them. All retrieved pathway maps are kept in a local database and presented to analysts through a graphical user interface. The tool has been used successfully to identify target genes for further analysis in a transcriptome study of human cytomegalovirus, and it saves analysts considerable time and effort by collecting and presenting the pathway maps that contain interesting genes once a transcriptome data set is given. A toy sketch of the fuzzy filtering step follows below.
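
A minimal sketch of the fuzzy filtering step, assuming trapezoidal membership functions over log2 fold-change and a fixed membership threshold; the paper's actual four classes and membership definitions may differ.

```python
# Toy fuzzy filtering of genes by expression change (case vs. control).
# The four class names, breakpoints, and threshold below are assumptions.
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 outside (a, d), 1 on [b, c], linear in between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

CLASSES = {  # membership over log2 fold-change
    "strongly-down": lambda x: trapezoid(x, -10.0, -10.0, -2.0, -1.0),
    "weakly-down":   lambda x: trapezoid(x, -2.0, -1.5, -0.5, 0.0),
    "weakly-up":     lambda x: trapezoid(x, 0.0, 0.5, 1.5, 2.0),
    "strongly-up":   lambda x: trapezoid(x, 1.0, 2.0, 10.0, 10.0),
}

def classify(log2_fc, threshold=0.5):
    """Assign a gene to every class whose membership exceeds the threshold."""
    return [name for name, mu in CLASSES.items() if mu(log2_fc) > threshold]

print(classify(1.2))  # -> ['weakly-up']
```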

Precise Geoid Model for Korea from Gravity and GPS Data

  • Choi, Kwang-Sun;Won, Ji-Hoon;Shin, Young-Hong
    • Journal of the Korean Geophysical Society
    • /
    • v.9 no.3
    • /
    • pp.181-188
    • /
    • 2006
  • The data, methodology, and resulting accurate gravimetric geoid model for the Korean Peninsula (latitude 32˚ N to 40˚ N, longitude 124˚ E to 131˚ E) are presented in this study. The data used were a high-degree geopotential model (the EGM96 spherical harmonic coefficient set), a set of 12,615 land gravity observations, 1,056,075 shipborne gravity observations, and KMS2002 gravity anomalies from satellite altimetry. The remove-restore technique was successfully applied to combine these data sets, using the EGM96 coefficients up to degree and order 112. The residual geoid was calculated from residual Free-Air anomaly values using the spherical Stokes formula with a 37-km integration cap radius. The geoid model is referenced to the WGS84 geodetic system and was tested against a set of GPS/levelling geoid undulations. The absolute accuracy is 0.132 m, an improvement over the PNU95 geoid model. A schematic sketch of the remove-restore steps follows below.
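
A schematic of the remove-restore computation described above. The `toy_stokes` stand-in and the numbers are placeholders just to make the sketch runnable; the study's actual Stokes integration over a 37-km cap is far more involved.

```python
# Remove-restore sketch: subtract the long-wavelength field implied by the
# geopotential model (e.g., EGM96 to degree/order 112), integrate the
# residual anomalies with Stokes' formula, then restore the model geoid.
import numpy as np

def remove_restore(free_air, dg_model, n_model, stokes_integral):
    """free_air: observed Free-Air anomalies; dg_model / n_model: anomaly and
    geoid height implied by the geopotential model; stokes_integral: function
    mapping residual anomalies to the residual geoid."""
    dg_residual = free_air - dg_model          # "remove" the long wavelengths
    n_residual = stokes_integral(dg_residual)  # Stokes' formula (37-km cap)
    return n_model + n_residual                # "restore" the model geoid

# Toy stand-in for the Stokes integration.
toy_stokes = lambda dg: 0.01 * dg
print(remove_restore(np.array([12.0]), np.array([10.0]),
                     np.array([22.3]), toy_stokes))
```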

Key Audit Matters Readability and Investor Reaction

  • CHIRAKOOL, Wichuta;POONPOOL, Nuttavong;WANGCHAROENDATE, Suwan;BHONGCHIRAWATTANA, Utis
    • Journal of Distribution Science
    • /
    • v.20 no.9
    • /
    • pp.73-81
    • /
    • 2022
  • Purpose: This study examined whether the readability of key audit matters (KAMs) influences investor reaction. Research design, data, and methodology: Signaling theory was applied to explain the behavior of investors when they receive information useful for their decisions. Data were collected from 1,866 firm-year observations of Thai listed companies on both the Stock Exchange of Thailand (SET) and the Market for Alternative Investment (MAI) for the fiscal years 2016-2019. The study was based on secondary data collected from the SET Market Analysis and Reporting Tool (SETSMART) database and the Stock Exchange of Thailand's website (www.set.or.th). A statistical regression method with panel data analysis was used to evaluate possible associations between KAMs readability and investor reaction. The study relied on a popular readability measure (the Fog Index), while investor reaction was measured by absolute cumulative abnormal return and abnormal trading volume. Results: KAMs readability had a significant positive effect on both absolute cumulative abnormal return and abnormal trading volume. Conclusion: This study makes a significant contribution to understanding the implications of KAMs in an emerging economy. The results reveal that more readable KAMs disclosures convey new insights and useful information to investors and reduce the information gap between auditors and investors. A toy sketch of the Fog Index computation follows below.
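
A minimal sketch of the Gunning Fog Index used as the readability measure. The syllable heuristic is a crude assumption; production readability tools use dictionaries or more careful rules.

```python
# Gunning Fog Index: 0.4 * (words per sentence + 100 * fraction of words
# with three or more syllables). The syllable counter below is a rough guess.
import re

def syllables(word):
    """Rough syllable count: runs of vowels, with a trailing-'e' correction."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def fog_index(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    complex_words = [w for w in words if syllables(w) >= 3]
    return 0.4 * (len(words) / len(sentences)
                  + 100 * len(complex_words) / len(words))

kam = ("The auditor evaluated the impairment of goodwill. "
       "Management's valuation assumptions were examined.")
print(round(fog_index(kam), 1))
```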

A hybrid approach to predict the bearing capacity of a square footing on a sand layer overlying clay

  • Erdal Uncuoglu;Levent Latifoglu;Zulkuf Kaya
    • Geomechanics and Engineering
    • /
    • v.34 no.5
    • /
    • pp.561-575
    • /
    • 2023
  • This study aims to provide a fast solution to the bearing-capacity problem in layered soils using easily obtainable parameters, without requiring charts or the calculation of additional parameters. To this end, a hybrid approach combining the finite element (FE) method and a machine learning technique has been applied. First, an FE model was generated and validated against the results of in-situ loading tests; then a total of 192 three-dimensional FE analyses were performed. A data set for multigene genetic programming (MGGP) was created from the soil properties, footing sizes, and layering conditions used in the FE analyses, together with the ultimate bearing capacity values they produced. The problem was modeled with five input parameters and one output parameter to propose a bearing capacity formula. Ultimate bearing capacity values estimated by the proposed formula on a held-out set of 20 cases, independent of the data used in MGGP modelling, were compared with bearing capacities calculated by semi-empirical methods. The MGGP method yielded successful results for the problem considered: the proposed formula provides reasonable predictions and is efficient enough to be used in practice. A toy sketch of the held-out evaluation step follows below.
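
A sketch of the held-out evaluation step only: a candidate formula is scored by RMSE on 20 unseen cases. The five input names and the formula itself are invented placeholders, not the paper's MGGP-derived equation or its FE data.

```python
# Score a candidate bearing-capacity formula on 20 held-out cases, mirroring
# the paper's train/validate split. All values below are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)
X_test = rng.uniform(0.5, 2.0, size=(20, 5))        # 5 inputs (names assumed)
q_fe = 100 + 80 * X_test[:, 0] + 50 * X_test[:, 2]  # stand-in "FE" targets

def proposed_formula(x):
    """Placeholder for the MGGP-derived bearing-capacity expression."""
    return 95 + 82 * x[:, 0] + 48 * x[:, 2]

pred = proposed_formula(X_test)
rmse = np.sqrt(np.mean((pred - q_fe) ** 2))
print(f"RMSE on the 20 held-out cases: {rmse:.1f}")
```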

A Study on the Management of Stock Data with an Object Oriented Database Management System (객체지향 데이타베이스를 이용한 주식데이타 관리에 관한 연구)

  • 허순영;김형민
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.21 no.3
    • /
    • pp.197-214
    • /
    • 1996
  • Financial analysis of stock data usually involves extensive computation over large time series data sets. To handle the size of these data sets and the complexity of the analyses, database management systems have been increasingly adopted for efficient management of stock data. In particular, relational database management systems have been widely employed because of their simple data management approach. However, the normalized two-dimensional tables and the structured query language of relational systems turn out to be less effective than expected at accommodating time series stock data and the various computational operations on it. This paper explores a new approach to stock data management based on an object-oriented database management system (ODBMS) and proposes a data model that supports time series data storage and incorporates a set of financial analysis functions; in terms of functional stock data analysis, it focuses in particular on a primitive set of operations such as the variance of stock data. We first point out the problems of the relational approach to stock data management and show the strengths of the ODBMS. We then propose an object model delineating the structural relationships among the objects used in stock data management and the behavioral operations involved in financial analysis. A prototype system was developed using a commercial ODBMS. A toy sketch of such an object model follows below.
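
A toy object model in the spirit of the ODBMS approach: the time series and its analysis operations (variance, moving average) live together in one object rather than being split across normalized tables. Class and method names are assumed for illustration.

```python
# Time-series stock object with behavior attached, illustrating the
# object-oriented alternative to normalized relational tables.
from dataclasses import dataclass, field
from statistics import mean, pvariance

@dataclass
class StockSeries:
    ticker: str
    prices: list = field(default_factory=list)  # ordered by trading day

    def append(self, price):
        self.prices.append(price)

    def variance(self):
        """A primitive analysis operation stored with the object itself."""
        return pvariance(self.prices)

    def moving_average(self, window):
        return [mean(self.prices[i - window:i])
                for i in range(window, len(self.prices) + 1)]

s = StockSeries("ACME", [10.0, 10.5, 10.2, 10.8, 11.0])
print(s.variance(), s.moving_average(3))
```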

Storage Policies for Versions Management of XML Documents using a Change Set (변경 집합을 이용한 XML 문서의 버전 관리를 위한 저장 기법)

  • Yun Hong Won
    • The KIPS Transactions:PartD
    • /
    • v.11D no.7 s.96
    • /
    • pp.1349-1356
    • /
    • 2004
  • Interest in version management is increasing in electronic commerce applications that require data mining and in document processing systems related to digital government. In this paper, we define a change set for managing historical information and maintaining XML documents over a long period of time, and we propose several storage policies for XML documents based on change sets. A change set comprises a set of change operations together with temporal dimensions; the change operation set is composed of schema change operations and data change operations. We propose three storage policies using change sets: (1) storing all the change sets, (2) storing the change sets and the versions periodically, and (3) storing aggregations of change sets together with versions at appropriate points in time. We also compare the performance of the existing storage policy with that of the proposed policies. Through the performance evaluation, we show that storing aggregations of change sets together with versions at appropriate points in time outperforms the other methods. A toy sketch of the change-set structure follows below.
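
A minimal sketch of the change-set idea: a bundle of operations plus a validity interval, replayed over a stored base version. The flat path-to-value document model is an assumption made for brevity; real XML versioning would operate on trees.

```python
# Change set = change operations (schema or data) + temporal dimensions.
# A version is rebuilt by replaying change sets onto a base version.
from dataclasses import dataclass, field

@dataclass
class ChangeOp:
    kind: str    # "schema" or "data"
    action: str  # "insert", "delete", or "update"
    path: str    # location in the XML document
    value: str = ""

@dataclass
class ChangeSet:
    valid_from: str  # temporal dimension (ISO dates, assumed)
    valid_to: str
    ops: list = field(default_factory=list)

def rebuild(base_version, change_sets):
    """Replay data-change operations over a base version (toy flat-dict model)."""
    doc = dict(base_version)
    for cs in change_sets:
        for op in cs.ops:
            if op.action == "delete":
                doc.pop(op.path, None)
            else:  # insert / update
                doc[op.path] = op.value
    return doc

cs = ChangeSet("2004-01-01", "2004-06-30",
               [ChangeOp("data", "update", "/doc/title", "v2 title")])
print(rebuild({"/doc/title": "v1 title"}, [cs]))
```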

Improving the Performance of Korean Text Chunking by Machine learning Approaches based on Feature Set Selection (자질집합선택 기반의 기계학습을 통한 한국어 기본구 인식의 성능향상)

  • Hwang, Young-Sook;Chung, Hoo-jung;Park, So-Young;Kwak, Young-Jae;Rim, Hae-Chang
    • Journal of KIISE:Software and Applications
    • /
    • v.29 no.9
    • /
    • pp.654-668
    • /
    • 2002
  • In this paper, we present an empirical study on improving Korean text chunking through machine learning and feature set selection. We focus on two issues: selecting a feature set for Korean chunking, and alleviating data sparseness. To select a proper feature set, we use a heuristic search through the space of feature sets, with the estimated performance of a machine learning algorithm as a measure of the "incremental usefulness" of a particular feature set. To smooth the data sparseness, we suggest using a general part-of-speech tag set and selective lexical information chosen with Korean language characteristics in mind. Experimental results showed that chunk tags and lexical information within a given context window are important features, while spacing-unit information is less important, findings that are independent of the machine learning technique used. Furthermore, using selective lexical information provides not only a smoothing effect but also a smaller feature space than using all lexical information. Korean text chunking with memory-based learning and decision tree learning over the selected feature space achieved precision/recall of 90.99%/92.52% and 93.39%/93.41%, respectively. A toy sketch of the greedy feature-set search follows below.
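
A sketch of the "incremental usefulness" search rendered as greedy forward selection. `evaluate` stands in for training and validating the chunker with a given feature set, and the toy gains merely mimic the reported finding that spacing-unit information adds little; both are assumptions, not the paper's procedure verbatim.

```python
# Greedy forward feature-set selection driven by estimated performance.
def greedy_feature_selection(candidates, evaluate):
    """Add, at each step, the feature whose inclusion most improves the
    estimated performance; stop when no remaining feature helps."""
    selected, best = [], evaluate([])
    while candidates:
        scores = {f: evaluate(selected + [f]) for f in candidates}
        f_best = max(scores, key=scores.get)
        if scores[f_best] <= best:
            break
        best = scores[f_best]
        selected.append(f_best)
        candidates.remove(f_best)
    return selected, best

# Toy evaluator: chunk tags and POS context help a lot, lexical information
# helps some, spacing units add nothing (values invented for illustration).
TOY_GAIN = {"chunk_tag": 3.0, "pos_window": 2.0, "lexical": 1.0, "spacing": 0.0}
score = lambda fs: sum(TOY_GAIN[f] for f in fs)
print(greedy_feature_selection(["chunk_tag", "pos_window", "lexical", "spacing"],
                               score))
```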

The extension of the largest generalized-eigenvalue based distance metric $D_{ij}^{(1)}$ in arbitrary feature spaces to classify composite data points

  • Daoud, Mosaab
    • Genomics & Informatics
    • /
    • v.17 no.4
    • /
    • pp.39.1-39.20
    • /
    • 2019
  • Analyzing patterns in data points embedded in linear and non-linear feature spaces is one of the common research problems shared by different research areas, for example data mining, machine learning, pattern recognition, and multivariate analysis. In this paper, data points are heterogeneous sets of biosequences (composite data points); a composite data point is a set of ordinary data points (e.g., a set of feature vectors). We theoretically extend the derivation of the largest generalized-eigenvalue based distance metric $D_{ij}^{(1)}$ to any linear or non-linear feature space, and we prove that $D_{ij}^{(1)}$ is a metric under any linear or non-linear feature transformation function. We show the sufficiency and efficiency of the decision rule $\bar{\delta}_{\Xi_i}$ (i.e., the mean of $D_{ij}^{(1)}$) for classifying heterogeneous sets of biosequences, compared with the decision rules $\min_{\Xi_i}$ and $\mathrm{median}_{\Xi_i}$. We analyze the impact of linear and non-linear transformation functions on classifying and clustering collections of heterogeneous sets of biosequences, and we empirically show the impact of sequence length in simulated heterogeneous sequence sets on the classification and clustering results in linear and non-linear feature spaces. We also propose a new concept, the limiting dispersion map of the existing clusters in heterogeneous sets of biosequences embedded in linear and non-linear feature spaces, based on the limiting distribution of nucleotide compositions estimated from real data sets. Finally, empirical conclusions and scientific evidence are drawn from the experiments to support the theoretical results stated in this paper. A toy sketch of the generalized-eigenvalue computation follows below.
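
A sketch of the eigenvalue machinery behind a largest-generalized-eigenvalue distance between two composite data points (sets of feature vectors), using SciPy's generalized symmetric eigensolver on the two sample covariances. The paper's exact construction of $D_{ij}^{(1)}$ may differ; this only illustrates the computation.

```python
# Largest generalized eigenvalue of S_i v = lambda S_j v for the sample
# covariances of two sets of feature vectors (composite data points).
import numpy as np
from scipy.linalg import eigh

def largest_gen_eig(X_i, X_j, ridge=1e-6):
    """Largest lambda solving S_i v = lambda S_j v; a small ridge keeps the
    covariances positive definite."""
    S_i = np.cov(X_i, rowvar=False) + ridge * np.eye(X_i.shape[1])
    S_j = np.cov(X_j, rowvar=False) + ridge * np.eye(X_j.shape[1])
    eigvals = eigh(S_i, S_j, eigvals_only=True)  # ascending order
    return eigvals[-1]

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 4))        # one composite data point (set of vectors)
B = 2.0 * rng.normal(size=(25, 4))  # another, with larger spread
print(largest_gen_eig(A, B))
```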

A Study on the Incomplete Information Processing System (INiPS) Using Rough Set

  • Jeong, Gu-Beom;Chung, Hwan-Mook;Kim, Guk-Boh;Park, Kyung-Ok
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2000.11a
    • /
    • pp.243-251
    • /
    • 2000
  • In general, Rough Set theory is used for the classification, inference, and decision analysis of incomplete data, using approximation space concepts in an information system. An information system can include quantitative attribute values with interval characteristics, or incomplete data such as multiple or unknown (missing) values. Such incomplete data cause inconsistency in the information system and decrease the classification ability of systems based on Rough Sets. In this paper, we present the various types of incomplete data that may occur in an information system and propose the INcomplete information Processing System (INiPS), which converts an incomplete information system into a complete one using Rough Sets. A toy sketch of rough-set approximations follows below.
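
A minimal rough-set sketch: lower and upper approximations of a target set under the indiscernibility relation induced by attribute values. The boundary region (upper minus lower) is exactly the inconsistency that incomplete or conflicting data introduce; objects and attribute values are invented for illustration.

```python
# Rough-set lower/upper approximations over an indiscernibility relation.
def approximations(universe, target):
    """universe: {object: attribute-tuple}; objects with identical tuples are
    indiscernible. Returns the lower and upper approximations of target."""
    blocks = {}
    for obj, attrs in universe.items():
        blocks.setdefault(attrs, set()).add(obj)
    lower = {o for b in blocks.values() if b <= target for o in b}
    upper = {o for b in blocks.values() if b & target for o in b}
    return lower, upper

# x3 and x4 are indiscernible but only x3 is in the target, so both land in
# the boundary region (upper - lower): the inconsistency Rough Sets expose.
U = {"x1": ("high", "yes"), "x2": ("high", "yes"),
     "x3": ("low", "yes"), "x4": ("low", "yes")}
lower, upper = approximations(U, {"x1", "x2", "x3"})
print(lower, upper)
```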
