• Title/Summary/Keyword: analysis data


Improving the Subject Independent Classification of Implicit Intention By Generating Additional Training Data with PCA and ICA

  • Oh, Sang-Hoon
    • International Journal of Contents
    • /
    • v.14 no.4
    • /
    • pp.24-29
    • /
    • 2018
  • EEG-based brain-computer interfaces have focused on explicitly expressed intentions to assist physically impaired patients. For EEG-based brain-computer interfaces to function effectively, they should also be able to understand users' implicit intentions. Since it is hard to gather EEG signals from human brains, we do not have enough training data, which is essential for proper classification of implicit intention. In this paper, we improve the subject-independent classification of implicit intention through the generation of additional training data. In the first stage, we perform PCA (principal component analysis) on the training data to remove redundant components from the input data. After the dimension reduction by PCA, we train an ICA (independent component analysis) network whose outputs are statistically independent. We obtain additional training data by adding Gaussian noise to the ICA outputs and projecting them back to the input data domain. Through simulations with EEG data provided by CNSL, KAIST, we improve the classification performance from 65.05% to 66.69% with Gamma components. The proposed sample generation method can be applied to any machine learning problem with few samples.
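The augmentation pipeline this abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name and parameters are assumptions, and a plain PCA projection stands in for the full FastICA stage (a real reproduction would rotate the reduced components to statistical independence before adding noise).

```python
import numpy as np

def augment_with_pca_ica(X, n_components, n_copies=1, noise_std=0.05, seed=0):
    """Sketch of the paper's idea: PCA dimension reduction, Gaussian
    noise on the reduced (ideally ICA-separated) components, then
    projection back to the input data domain."""
    rng = np.random.default_rng(seed)
    mean = X.mean(axis=0)
    Xc = X - mean
    # PCA via SVD: keep the top principal components.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:n_components]            # projection to PC space
    Z = Xc @ W.T                     # reduced representation
    # (A FastICA step would go here to make the components of Z
    # statistically independent; it is omitted in this sketch.)
    samples = []
    for _ in range(n_copies):
        noisy = Z + rng.normal(scale=noise_std, size=Z.shape)
        samples.append(noisy @ W + mean)   # back to input domain
    return np.vstack(samples)
```

Each generated batch has the same shape as the training set, so the augmented data can simply be concatenated with the originals before classifier training.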

A guideline for the statistical analysis of compositional data in immunology

  • Yoo, Jinkyung;Sun, Zequn;Greenacre, Michael;Ma, Qin;Chung, Dongjun;Kim, Young Min
    • Communications for Statistical Applications and Methods
    • /
    • v.29 no.4
    • /
    • pp.453-469
    • /
    • 2022
  • The study of immune cellular composition has been of great scientific interest in immunology because of the generation of multiple large-scale datasets. From a statistical point of view, such immune cellular data should be treated as compositional. In compositional data, each element is positive and all the elements sum to a constant, which can generally be set to one. Standard statistical methods are not directly applicable to the analysis of compositional data because they do not appropriately handle correlations between the compositional elements. In this paper, we review statistical methods for compositional data analysis and illustrate them in the context of immunology. Specifically, we focus on regression analyses using log-ratio transformations and the alternative approach of Dirichlet regression analysis, discuss their theoretical foundations, and illustrate their applications with immune cellular fraction data generated from colorectal cancer patients.
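The centered log-ratio (CLR) transform is one standard member of the log-ratio family the abstract refers to; a minimal sketch (function name and example fractions are illustrative):

```python
import numpy as np

def clr(composition):
    """Centered log-ratio transform for one compositional sample:
    log of each part divided by the geometric mean of all parts."""
    x = np.asarray(composition, dtype=float)
    g = np.exp(np.mean(np.log(x)))      # geometric mean of the parts
    return np.log(x / g)

# A composition (parts sum to 1); its CLR coordinates sum to zero,
# which removes the unit-sum constraint before standard regression.
cell_fractions = np.array([0.5, 0.3, 0.2])
y = clr(cell_fractions)
```

Because the CLR coordinates are unconstrained (up to summing to zero), ordinary regression machinery can be applied to them, which is the motivation for the log-ratio approach discussed in the paper.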

Agent with Low-latency Overcoming Technique for Distributed Cluster-based Machine Learning

  • Seo-Yeon, Gu;Seok-Jae, Moon;Byung-Joon, Park
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.15 no.1
    • /
    • pp.157-163
    • /
    • 2023
  • Recently, as businesses and data types have become more complex and diverse, efficient data analysis using machine learning is required. However, since communication in the cloud environment is greatly affected by network latency, data analysis does not proceed smoothly when information delay occurs. In this paper, SPT (Safe Proper Time) was applied to the cluster-based machine learning data analysis agent proposed in previous studies to solve this delay problem. SPT is a method of direct remote memory access to a cluster that processes data between layers, effectively improving data transfer speed and ensuring the timeliness and reliability of data transfer.

Applying Bootstrap to Time Series Data Having Trend (추세 시계열 자료의 부트스트랩 적용)

  • Park, Jinsoo;Kim, Yun Bae;Song, Kiburm
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.38 no.2
    • /
    • pp.65-73
    • /
    • 2013
  • In simulation output analysis, the bootstrap is a resampling technique applicable to data that are too scarce to be statistically significant. The moving block bootstrap, the stationary bootstrap, and the threshold bootstrap are typical bootstrap methods used for autocorrelated time series data. They are nonparametric methods for stationary time series data, which correctly describe the original data. In simulation output analysis, however, we may not be able to use them because of non-stationarity in the data set caused by an increasing or decreasing trend. In such cases, we can remove the trend by differencing the data, which guarantees stationarity, and then obtain bootstrapped data from the differenced stationary series. Finally, applying the reverse transform to the bootstrapped data, we obtain pseudo-samples of the original data. In this paper, we introduce the applicability of bootstrap methods to time series data with trend, and then verify it through statistical analyses.
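The difference-bootstrap-reintegrate procedure can be sketched roughly as follows, assuming a moving block bootstrap on the differences; the function name, block length, and defaults are illustrative, not taken from the paper.

```python
import numpy as np

def bootstrap_trended_series(y, block_len=5, seed=0):
    """Sketch: difference a trended series to make it stationary,
    moving-block-bootstrap the differences, then integrate back
    (the reverse transform) to get a pseudo-sample of the original."""
    rng = np.random.default_rng(seed)
    d = np.diff(y)                       # differencing removes the trend
    n = len(d)
    # Overlapping blocks preserve short-range autocorrelation.
    blocks = [d[i:i + block_len] for i in range(n - block_len + 1)]
    resampled = []
    while len(resampled) < n:
        resampled.extend(blocks[rng.integers(len(blocks))])
    d_star = np.array(resampled[:n])
    # Reverse transform: cumulative sum anchored at the first observation.
    return np.concatenate([[y[0]], y[0] + np.cumsum(d_star)])
```

Repeating the call with different seeds yields the collection of pseudo-samples from which bootstrap statistics are computed.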

Considerations for generating meaningful HRA data: Lessons learned from HuREX data collection

  • Kim, Yochan
    • Nuclear Engineering and Technology
    • /
    • v.52 no.8
    • /
    • pp.1697-1705
    • /
    • 2020
  • To enhance the credibility of human reliability analysis, various kinds of data have recently been collected and analyzed. Although it is obvious that the quality of data is critical, the practices and considerations for securing data quality have not been sufficiently discussed. In this work, based on the experience of recent human reliability data extraction projects, which produced more than fifty thousand data points, we derive a number of issues to be considered for generating meaningful data. As a result, thirteen considerations are presented, pertaining to four different data extraction activities: preparation, collection, analysis, and application. Although these lessons were acquired from a single data collection framework, we believe the results will guide researchers toward the important issues in the process of extracting data.

Buying Pattern Discovery Using Spatio-Temporal Data Mart and Visual Analysis (고객군의 지리적 패턴 발견을 위한 데이터마트 구현과 시각적 분석에 관한 연구)

  • Cho, Jae-Hee;Ha, Byung-Kook
    • Journal of Information Technology Services
    • /
    • v.9 no.1
    • /
    • pp.127-139
    • /
    • 2010
  • Due to the development of information technology and of businesses based on customers' geographical locations, the need for the storage and analysis of geographical location data is increasing rapidly. Geographical location data have a spatio-temporal nature that differs from typical business data; therefore, different methods of data storage and analysis are required. This paper proposes a multi-dimensional data model and data visualization to analyze geographical location data efficiently and effectively. Purchase order data from an online farm-products brokerage business were used to build a prototype data mart. RFM scores are calculated to classify customers, and geocoding technology is applied to display the information on maps, thereby enhancing data visualization.
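RFM scoring reduces each customer's order history to recency, frequency, and monetary totals. A toy sketch follows; the function name, record layout, and data are illustrative, and the quantile binning of raw values into discrete scores is omitted.

```python
from datetime import date

def rfm(orders, today):
    """Compute per-customer (recency_days, frequency, monetary) from
    (customer, order_date, amount) records."""
    stats = {}
    for customer, order_date, amount in orders:
        last, f, m = stats.get(customer, (None, 0, 0.0))
        last = order_date if last is None else max(last, order_date)
        stats[customer] = (last, f + 1, m + amount)
    # Recency = days since the most recent order.
    return {c: ((today - last).days, f, m)
            for c, (last, f, m) in stats.items()}

orders = [
    ("A", date(2010, 1, 10), 30.0),
    ("A", date(2010, 2, 1), 20.0),
    ("B", date(2010, 1, 20), 70.0),
]
scores = rfm(orders, today=date(2010, 2, 11))
# scores["A"] == (10, 2, 50.0)
```

Customer segments (e.g., recent frequent big spenders) are then formed by binning each of the three values, and geocoded addresses let the segments be drawn on a map as the paper describes.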

A Study on Elementary Education Examples for Data Science using Entry (엔트리를 활용한 초등 데이터 과학 교육 사례 연구)

  • Hur, Kyeong
    • Journal of The Korean Association of Information Education
    • /
    • v.24 no.5
    • /
    • pp.473-481
    • /
    • 2020
  • Data science starts with small-data analysis and includes machine learning and deep learning for big data analysis. Data science is a core area of artificial intelligence technology and should be systematically reflected in the school curriculum. For data science education, Entry provides a data analysis tool suitable for elementary education. In big data analysis, data samples are extracted and analysis results are interpreted through statistical inference and judgment. In this paper, the big data analysis area that requires statistical knowledge is excluded from the elementary curriculum, and data science education examples focusing on the elementary level are proposed. To this end, the general data science education stages are explained first, and elementary data science education stages are newly proposed. Then, following these stages, an example of comparing the values of data variables and an example of analyzing correlations between data variables are presented using public small data provided by Entry. With the Entry data-analysis examples proposed in this paper, data science convergence education can be provided in elementary school using data generated from various subjects. In addition, data science educational materials combined with text, audio, and video recognition AI tools can be developed using Entry.
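Entry itself is block-based, but the correlation example in the abstract rests on the Pearson correlation coefficient, which can be sketched in a few lines of ordinary code (purely illustrative; the data are made up):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length
    variables: covariance divided by the product of standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Perfectly linearly related variables give r = 1.0.
r = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])
```

Values near +1 or -1 indicate a strong linear relationship between the two data variables, which is the intuition the elementary examples aim to build.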

An Insight Study on Keyword of IoT Utilizing Big Data Analysis (빅데이터 분석을 활용한 사물인터넷 키워드에 관한 조망)

  • Nam, Soo-Tai;Kim, Do-Goan;Jin, Chan-Yong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2017.10a
    • /
    • pp.146-147
    • /
    • 2017
  • Big data analysis is a set of techniques for effectively analyzing unstructured data such as Internet content, social network services, web documents generated in the mobile environment, e-mail, and social data, as well as well-formed structured data in databases. The main big data analysis techniques are data mining, machine learning, natural language processing, and pattern recognition, which were already used in statistics and computer science. Global research institutes have identified big data analysis as the most noteworthy new technology since 2011. Therefore, companies in most industries are making efforts to create new value through the application of big data. In this study, we used Social Metrics, a big data analysis tool from Daum Communications, to analyze public perceptions of the keyword "Internet of Things" over one month as of October 8, 2017. The results of the big data analysis are as follows. First, the top related search keyword for "Internet of Things" was found to be "technology" (995 occurrences). This study suggests theoretical implications based on these results.


A Case Study on Product Production Process Optimization using Big Data Analysis: Focusing on the Quality Management of LCD Production (빅데이터 분석 적용을 통한 공정 최적화 사례연구: LCD 공정 품질분석을 중심으로)

  • Park, Jong Tae;Lee, Sang Kon
    • Journal of Information Technology Services
    • /
    • v.21 no.2
    • /
    • pp.97-107
    • /
    • 2022
  • Recently, interest in smart factories has been increasing, and investments to improve intelligence and automation are being made continuously in manufacturing plants. Facility automation based on sensor data collection is now essential. Factories are operated based on data generated in all areas of production, including production management, facility operation, and quality management, together with an integrated standard information system. When producing LCD polarizer products, it is most important to link trace information across the data generated by the individual production processes. All systems involved in production must guarantee that there is no data loss and that data integrity is ensured. The large-capacity data collected from the individual systems are connected to one another through key values, so a real-time quality analysis processing system based on the connected, integrated system data is required. In this study, methods for large-capacity data collection, storage, integration, and loss prevention were presented for the optimization of LCD polarizer production. A risk-identification model for inspected products can be added, and the set of applicable product models is designed to be continuously expanded. A quality inspection and analysis system that maximizes the yield rate was designed by applying big data technology to the final inspection images of the products. Products predefined as analyzable are verified with a big-data kNN analysis model, and the individual analysis results are continuously fed back to the actual production site, operating as a virtuous cycle. Production optimization was performed by applying the system to the currently operating LCD polarizer production line.
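The kNN verification step mentioned in the abstract can be illustrated with a minimal classifier; the feature vectors and pass/fail labels here are toy stand-ins for the paper's inspection-image features, not its actual data.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Minimal k-nearest-neighbours classifier.
    train: list of (feature_vector, label) pairs."""
    # Sort training points by Euclidean distance to the query.
    dists = sorted((math.dist(x, query), label) for x, label in train)
    # Majority vote among the k closest neighbours.
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

train = [((0.1, 0.2), "pass"), ((0.2, 0.1), "pass"),
         ((0.9, 0.8), "fail"), ((0.8, 0.9), "fail")]
label = knn_predict(train, (0.15, 0.15), k=3)
```

In a production setting the feature vectors would come from the inspection images, and new analysis results would be appended to `train` to keep the model current, matching the virtuous-cycle operation the abstract describes.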

A Study on the Data-Based Organizational Capabilities by Convergence Capabilities Level of Public Data (공공데이터 융합역량 수준에 따른 데이터 기반 조직 역량의 연구)

  • Jung, Byoungho;Joo, Hyungkun
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.18 no.4
    • /
    • pp.97-110
    • /
    • 2022
  • The purpose of this study is to analyze the level of public data convergence capabilities of administrative organizations and to explore the important variables in data-based organizational capabilities. The theoretical background covers public data and its use activation, joint use, convergence, administrative organization, and convergence constraints, explained in terms of the Public Data Act, the Electronic Government Act, and the Data-Based Administrative Act. The research model specified the effect on data-based organizational capabilities of data-based administrative capability, public data operation capabilities, and public data operation constraints. It also examined whether data-based organizational operation capabilities differ by the level of data convergence capabilities. The analysis was conducted with hierarchical cluster analysis and multiple regression analysis. First, the hierarchical cluster analysis yielded three groups: a group that uses only public structured data, a group that uses both structured and unstructured public data, and a group that uses both public and private data. Second, the critical variables of data-based organizational operation capabilities were found to be data-based administrative planning and administrative technology, the supervisory organizations and technical systems for public data convergence, and the data sharing and market transaction constraints. Finally, the essential independent variables of data-based organizational competencies differ by group. As a theoretical implication, this research updates the management information systems literature by explaining the Public Data Act, the Electronic Government Act, and the Data-Based Administrative Act. As a practical implication, reinforcing the use of public data requires promoting data standardization and search convenience and eliminating lukewarm attitudes and self-interested behavior toward data sharing.
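The hierarchical cluster analysis used to obtain the three groups can be illustrated with a toy single-linkage procedure; the one-dimensional scores, function name, and cluster count here are illustrative, not the study's survey data.

```python
def single_linkage(points, n_clusters):
    """Toy agglomerative (single-linkage) hierarchical clustering:
    repeatedly merge the two clusters whose closest members are nearest,
    until n_clusters remain."""
    clusters = [[p] for p in points]

    def dist(a, b):
        # Single linkage: distance between the closest pair of members.
        return min(abs(x - y) for x in a for y in b)

    while len(clusters) > n_clusters:
        i, j = min(
            ((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
            key=lambda ij: dist(clusters[ij[0]], clusters[ij[1]]),
        )
        clusters[i] += clusters.pop(j)
    return clusters

# Five hypothetical convergence-capability scores fall into three groups.
groups = single_linkage([1.0, 1.2, 5.0, 5.1, 9.0], n_clusters=3)
```

Cutting the merge sequence at a chosen number of clusters is what turns the hierarchy into discrete groups, after which group-wise regressions can be compared as in the study.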