• Title/Summary/Keyword: Data Value

A Hybrid Value Predictor using Speculative Update in Superscalar Processors (슈퍼스칼라 프로세서에서 모험적 갱신을 사용한 하이브리드 결과값 예측기)

  • Park, Hong-Jun; Sin, Yeong-Ho; Jo, Yeong-Il
    • Journal of KIISE: Computer Systems and Theory / v.28 no.11 / pp.592-600 / 2001
  • To improve the performance of wide-issue superscalar microprocessors, it is essential to increase the instruction fetch width and the issue rate. Data dependences are a major hurdle to exploiting ILP (instruction-level parallelism) efficiently, so several related works have suggested that the limits imposed by data dependences can be overcome to some extent with data value prediction. However, the suggested mechanisms may access the same value prediction table entry again before it has been updated with the real data value. Such accesses cause incorrect value predictions based on stale data, incurring misprediction penalties and lowering performance. In this paper, we propose a new hybrid value predictor that achieves high performance by reducing the use of stale data. Because the proposed hybrid value predictor can update the prediction table speculatively, it efficiently reduces the number of instructions mispredicted due to stale data. For the SPECint95 benchmark programs on a 16-issue superscalar processor, simulation results show that the average prediction accuracy increases from 59% with non-speculative update to 72% with speculative update.

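The core idea in this abstract, writing the predicted value into the prediction table at prediction time rather than waiting for the instruction to complete, can be sketched as follows. The stride-based table, class name, and recovery policy below are illustrative assumptions; the paper's hybrid predictor is not reproduced here.

```python
class StridePredictor:
    """Sketch of a stride value predictor with optional speculative update.

    Illustrative only: the table layout and recovery policy are assumptions,
    not the hybrid predictor proposed in the paper.
    """

    def __init__(self, size=1024, speculative=True):
        self.size = size
        self.speculative = speculative
        self.last = [0] * size        # value the next prediction is based on
        self.stride = [0] * size      # current stride estimate
        self.committed = [0] * size   # last architecturally committed value

    def _idx(self, pc):
        return pc % self.size

    def predict(self, pc):
        i = self._idx(pc)
        pred = self.last[i] + self.stride[i]
        if self.speculative:
            # Write the prediction into the table immediately, so a later
            # dynamic instance of the same instruction, issued before this
            # one completes, predicts from fresh rather than stale data.
            self.last[i] = pred
        return pred

    def verify(self, pc, predicted, actual):
        """Called at completion, when the real result is known."""
        i = self._idx(pc)
        self.stride[i] = actual - self.committed[i]
        self.committed[i] = actual
        if not self.speculative or predicted != actual:
            # Non-speculative mode, or a misprediction: repair the entry
            # with the real value, discarding speculative contents.
            self.last[i] = actual
```

With `speculative=False`, several in-flight instances of the same instruction all read the old table contents, which is exactly the stale-data case the abstract identifies; with `speculative=True`, each instance predicts from the value written by the previous one.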

Comparison of RAM Target Value and Operation Data in Air Weapon Systems (야전운용자료를 활용한 항공무기체계의 RAM 목표값 비교분석)

  • Kim, In Seok; Jung, Won
    • Journal of Applied Reliability / v.15 no.4 / pp.282-288 / 2015
  • Purpose: The purpose of this study is to compare the RAM (reliability, availability, and maintainability) values in the acquisition phase with those in the operational period for air weapon systems. The objective is to determine whether the RAM values achieved in the field are sufficient and to examine how far they deviate from the target values. Methods: For a case study, the ○○ training aircraft was selected. Data from two acquisition sources are utilized: one is operational data on the domestic aircraft developed through research and development, and the other is data on imported aircraft. The two sources were collected independently. Results: According to the analysis, the domestic aircraft shows high deviation in RAM values compared to the imported systems, which is attributed to continuous reliability improvement efforts. In terms of maintainability, the result shows only a slight deviation, and the availability meets the requirement. Conclusion: The results of this study can be used to find ways to sustain the weapon system effectively. If the RAM performance is significantly lower than the target value, design improvement activities are necessary to achieve the RAM target.
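The comparison in this abstract rests on a few standard RAM definitions. As a rough illustration only, the sketch below computes MTBF and steady-state availability A = MTBF / (MTBF + MTTR) from hypothetical field records; the numbers, and even the exact availability definition used in the study, are assumptions.

```python
# Illustrative RAM calculation from hypothetical field records; the figures
# are invented for the example and are not taken from the study.

def mtbf(operating_hours: float, failures: int) -> float:
    """Mean time between failures."""
    return operating_hours / failures if failures else float("inf")

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability A = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Hypothetical example: 12,000 flight hours, 40 failures, 6 h mean repair time.
m = mtbf(12_000, 40)          # 300 h
a = availability(m, 6.0)      # about 0.980
print(f"MTBF = {m:.0f} h, availability = {a:.3f}")
```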

Interpretation of Analytical Data of Ion Components in Precipitation, Seoul (서울 地域 降水中 이온成分 分析資料의 解析)

  • 강공언; 이주희; 김희강
    • Journal of Korean Society for Atmospheric Environment / v.12 no.3 / pp.323-332 / 1996
  • Precipitation samples were collected by the wet-only sampling method in Seoul from September 1993 to June 1995. The samples were analysed for the anions ($NO_3^-$, $NO_2^-$, $SO_4^{2-}$, $Cl^-$, and $F^-$) and cations ($Na^+$, $K^+$, $Ca^{2+}$, $Mg^{2+}$, and $NH_4^+$) in addition to pH and electrical conductivity. To establish chemical analysis data of high quality, quality assurance checks of the analytical data were performed by considering the ion balance and by comparing the measured conductivity with the conductivity calculated from the ion concentrations. When the various ion-balance screening methods used in previous work were applied to the data set measured in this study, the f value, expressed as $\Sigma C/\Sigma A$, was found to be inappropriate for data screening. The scatter plot of cations against anions for each sample showed the general tendency of the ion balance but could not provide a quantitative screening criterion for a data set containing samples at various concentration levels. The h value, defined as $(A-C)/C$ for $C \geq A$ and $(A-C)/A$ for $C < A$, was therefore used to check the ion balance; the screening criterion based on the h value, however, must vary with the total ion concentration of the sample. In this study, the quality of the chemical analysis data was assured by considering both the ion balance evaluated by the h value and the conductivity balance, and quality control was achieved through these quality assurance methods. As a result, 67 of the 77 samples were judged valid. As the measure of central tendency for the statistical summary of the analytical parameters, the volume-weighted mean was found to represent the general chemistry of precipitation better than the arithmetic mean. The volume-weighted mean pH was 5.0, and 25% of the samples had a pH below this mean. The concentrations of sulfate and nitrate in precipitation were 90.4 ueq/L and 32.4 ueq/L, which made up 59% and 21% of all anions, respectively. The ratio $SO_4^{2-}/(NO_3^- + NO_2^-)$ in precipitation was 2.7, which indicates that the contributions of $H_2SO_4$ and $HNO_3$ to the acidity of precipitation are 70% and 30%, respectively.

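The two quality-assurance checks described in the abstract, the ion balance via the h value and the conductivity balance, can be sketched as below. The equivalent-conductance constants are approximate literature values at 25 °C and the sample concentrations are hypothetical; neither is taken from the paper.

```python
# Sketch of the ion-balance (h value) and conductivity-balance checks.
# Equivalent conductances are approximate literature values (S*cm^2/eq, 25 C),
# and the sample is hypothetical, not one of the Seoul precipitation samples.

LAMBDA = {"H": 349.8, "Na": 50.1, "K": 73.5, "NH4": 73.6, "Ca": 59.5, "Mg": 53.1,
          "Cl": 76.3, "NO3": 71.4, "NO2": 71.8, "SO4": 80.0, "F": 55.4}
CATIONS = {"H", "Na", "K", "NH4", "Ca", "Mg"}

def h_value(ions_ueq):
    """Ion-balance index as defined in the abstract:
    h = (A - C)/C if C >= A, else (A - C)/A, with A and C in ueq/L."""
    C = sum(v for k, v in ions_ueq.items() if k in CATIONS)
    A = sum(v for k, v in ions_ueq.items() if k not in CATIONS)
    return (A - C) / C if C >= A else (A - C) / A

def calculated_conductivity(ions_ueq):
    """Conductivity (uS/cm) estimated from ion concentrations in ueq/L."""
    return sum(LAMBDA[k] * v for k, v in ions_ueq.items()) / 1000.0

# Hypothetical sample (ueq/L); a pH of 4.7 corresponds to [H+] of about 20 ueq/L.
sample = {"H": 20, "Na": 15, "K": 3, "NH4": 45, "Ca": 30, "Mg": 8,
          "Cl": 20, "NO3": 32, "NO2": 1, "SO4": 65, "F": 2}
print(f"h = {h_value(sample):+.3f}")
print(f"calculated conductivity = {calculated_conductivity(sample):.1f} uS/cm")
```

A sample would then be accepted if both |h| and the relative difference between measured and calculated conductivity fall below screening thresholds that, as the abstract notes, depend on the total ion concentration.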

A Study on Expansion Proposal of Data Dividend Qualification Based on the Contribution of Platform Workers (플랫폼 노동자의 기여에 따른 데이터 배당 자격 확대 제안 연구)

  • CHOI, Seoyeon; SHIN, Seoung-Jung
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.21 no.2 / pp.187-193 / 2021
  • In February 2020, Gyeonggi-do paid the world's first Data Dividend to residents of Gyeonggi Province who had produced data by using the local currency. Currently, the Data Dividend is paid only to the consumers who produced the data; this paper studies expanding Data Dividend qualification according to contribution to the creation of added value. First, it raises the question of whether it is right for the recipients of the Data Dividend to be only the consumers who produced the data. Second, by analyzing the four elements of the data valuation criteria that influence the creation of added value, it identifies the parties that contribute to that value creation. The four factors are productivity, effectiveness, concreteness, and usability, and the parties corresponding to each factor were analyzed. On this basis, it was determined whether platform workers contribute to the creation of added value. In conclusion, it was confirmed that not only consumers, the original data producers, but also platform workers, who contribute to the concreteness element of data valuation, can qualify for the Data Dividend. Since this paper focuses on the necessity of a Data Dividend centered on platform workers, the validity of other parties that influence added value is excluded from its scope.

Robust Singular Value Decomposition Based on Weighted Least Absolute Deviation Regression

  • Jung, Kang-Mo
    • Communications for Statistical Applications and Methods / v.17 no.6 / pp.803-810 / 2010
  • The singular value decomposition of a rectangular matrix is a basic tool for understanding the structure of data, particularly the relationship between row and column factors. However, the conventional singular value decomposition is based on the least squares criterion and is not robust to outliers. We propose a simple robust singular value decomposition algorithm based on weighted least absolute deviation regression, which is not sensitive to leverage points. Its implementation is easy and the computation time is reasonably low. Numerical results reveal the data structure and identify outlying observations.
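A minimal sketch of this idea, fitting a rank-one term by alternating least-absolute-deviation (L1) regressions, each of which reduces to a weighted median, is given below. It is an illustration in the spirit of the abstract, not the authors' exact weighted-LAD algorithm; the function names and iteration count are assumptions.

```python
import numpy as np

def weighted_median(values, weights):
    """Minimizer of sum_i w_i * |x - values_i|."""
    order = np.argsort(values)
    values, weights = values[order], weights[order]
    cum = np.cumsum(weights)
    return values[np.searchsorted(cum, 0.5 * cum[-1])]

def robust_rank1(X, n_iter=50, eps=1e-12):
    """Alternating L1 fit of a rank-one term X ~ d * u v^T.

    For fixed v, each u_i minimizes sum_j |X_ij - u_i v_j|, which is the
    weighted median of X_ij / v_j with weights |v_j| (and symmetrically for v).
    """
    n, p = X.shape
    u, v = np.ones(n), np.ones(p)
    for _ in range(n_iter):
        for i in range(n):
            u[i] = weighted_median(X[i] / v, np.abs(v))
        u[np.abs(u) < eps] = eps          # avoid division by zero below
        for j in range(p):
            v[j] = weighted_median(X[:, j] / u, np.abs(u))
        v[np.abs(v) < eps] = eps
    d = np.linalg.norm(u) * np.linalg.norm(v)
    return d, u / np.linalg.norm(u), v / np.linalg.norm(v)

# A rank-one matrix with two gross outliers: the L1 fit recovers the clean
# structure, which an ordinary least-squares SVD would distort.
clean = np.outer(np.arange(1, 9), np.arange(1, 6)).astype(float)
X = clean.copy()
X[2, 3] += 100.0
X[6, 1] -= 80.0
d, u, v = robust_rank1(X)
print(np.abs(d * np.outer(u, v) - clean).max())
```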

Filtering Correction Method and Performance Comparison for Time Series Data

  • Baek, Jongwoo; Choi, Jiyoung; Jung, Hoekyung
    • Journal of Information and Communication Convergence Engineering / v.20 no.2 / pp.125-130 / 2022
  • In modern society, as large amounts of data are used for research or commercial purposes, the value of data is steadily increasing. Research on collecting valuable data is being actively conducted in related fields, but it is difficult to collect adequate data because the quality of the collected values is determined by the performance of existing sensors. Methods for effectively reducing noise have been proposed to solve this problem, but performance is still degraded by noise-induced damage to the data. In this paper, a device capable of collecting time series data was designed to correct such data noise; a correction technique was applied by injecting error values into representatively collected ultrafine dust data, and the performance before and after correction was compared. Kalman, low-pass (LPF), Savitzky-Golay, and moving average filters were used as correction methods. In the experiments, the Savitzky-Golay and moving average filters showed the best correction rates. In this way, the performance of the sensor can be supplemented, and it is expected that data can be collected effectively.
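The comparison described in the abstract can be reproduced in spirit with the sketch below: a noisy synthetic "ultrafine dust" series is smoothed with moving-average, first-order low-pass, Savitzky-Golay, and scalar Kalman filters, and the RMSE against the known truth is reported. The signal shape, noise level, and filter parameters are assumptions, not the paper's device or data.

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)

# Hypothetical ultrafine-dust series: a slow daily trend plus sensor noise.
t = np.linspace(0, 24, 500)
truth = 25 + 10 * np.sin(2 * np.pi * t / 24)
noisy = truth + rng.normal(0, 3, t.size)

def moving_average(x, w=15):
    return np.convolve(x, np.ones(w) / w, mode="same")

def low_pass(x, alpha=0.1):
    """First-order IIR low-pass filter (exponential smoothing)."""
    y = np.empty_like(x)
    y[0] = x[0]
    for i in range(1, len(x)):
        y[i] = alpha * x[i] + (1 - alpha) * y[i - 1]
    return y

def kalman_1d(x, q=1e-3, r=9.0):
    """Scalar Kalman filter tracking a slowly varying level."""
    est, p = x[0], 1.0
    out = np.empty_like(x)
    for i, z in enumerate(x):
        p += q                # predict step: level assumed nearly constant
        k = p / (p + r)       # Kalman gain
        est += k * (z - est)  # update with the new measurement
        p *= (1 - k)
        out[i] = est
    return out

def rmse(y):
    return np.sqrt(np.mean((y - truth) ** 2))

for name, y in [("moving average", moving_average(noisy)),
                ("low-pass", low_pass(noisy)),
                ("Savitzky-Golay", savgol_filter(noisy, 31, 3)),
                ("Kalman", kalman_1d(noisy))]:
    print(f"{name:15s} RMSE = {rmse(y):.2f}")
```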

Neural Network for Software Reliability Prediction with Unnormalized Data (비정규화 데이터를 이용한 신경망 소프트웨어 신뢰성 예측)

  • Lee, Sang-Un
    • The Transactions of the Korea Information Processing Society / v.7 no.5 / pp.1419-1425 / 2000
  • When predicting software reliability, the point at which testing will stop and the number of faults remaining in the software (the maximum value of the data) cannot be known during the testing process; the maximum value therefore has to be assumed, and the training result can be inaccurate. In this paper, we present a neural network approach for software reliability prediction with unnormalized (actual, as-collected) data. This approach does not require the maximum value of the data, so the network can be used without normalization, yet its predictive accuracy is better. The unnormalized method also shows better predictive accuracy than the method that normalizes by an assumed maximum value. Therefore, this model can be put to good use in software reliability prediction with unnormalized data.

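A minimal sketch of the approach, training a small regression network directly on raw cumulative-failure counts so that no maximum value has to be assumed, is given below. The network size, the scikit-learn MLPRegressor stand-in, and the failure counts are all illustrative assumptions rather than the paper's architecture or datasets.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical cumulative-failure data (testing week -> faults found so far);
# the numbers are made up for illustration, not the paper's datasets.
weeks = np.arange(1, 21).reshape(-1, 1).astype(float)
cum_faults = np.array([ 3,  7, 12, 18, 25, 31, 36, 40, 44, 47,
                       50, 52, 54, 56, 57, 58, 59, 60, 60, 61], dtype=float)

# Train directly on the raw (unnormalized) values: no assumption about the
# eventual maximum number of faults is needed.
net = MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=20_000, random_state=0)
net.fit(weeks, cum_faults)

# Predict the cumulative faults for the next testing weeks.
future = np.arange(21, 25).reshape(-1, 1).astype(float)
print(net.predict(future).round(1))
```

The point of the sketch is only that the pipeline runs without a normalizing maximum; predictive quality depends on the architecture and data, which the paper evaluates.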

Determinants of Firm Value and Profitability: Evidence from Indonesia

  • SUDIYATNO, Bambang; PUSPITASARI, Elen; SUWARTI, Titiek; ASYIF, Maulana Muhammad
    • The Journal of Asian Finance, Economics and Business / v.7 no.11 / pp.769-778 / 2020
  • The purpose of this study was to examine the role of profitability as a mediating variable in influencing firm value. This study uses a sample of manufacturing companies listed on the Indonesia Stock Exchange from 2016 to 2018. The data are panel data, analyzed with multiple regression. Based on the Sobel test, profitability mediates the effect of firm size on firm value: the effect of firm size on firm value is indirect, operating through profitability. Therefore, the market price of the shares of large-scale companies will increase if the resulting profitability is high. Capital structure and managerial ownership influence firm value directly. The results showed that managerial ownership and firm size had a positive effect on profitability, while capital structure had no effect on profitability. Capital structure and managerial ownership have a negative effect on firm value, while firm size and profitability have a positive effect on firm value. The main finding of this study is that profitability acts as an intervening variable mediating the relationship between firm size and firm value.
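The mediation logic, estimating the size-to-profitability path and the profitability-to-value path and then applying the Sobel test to the indirect effect, can be sketched as follows on simulated data. The variable names (ROA and Tobin's Q proxies), coefficients, and sample are illustrative assumptions, not the study's Indonesian panel.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

# Hypothetical firm-level data standing in for the panel; values are simulated.
rng = np.random.default_rng(1)
n = 200
size = rng.normal(14, 1.5, n)                    # e.g. ln(total assets)
roa = 0.4 * size + rng.normal(0, 1, n)           # profitability proxy
q = 0.5 * roa + 0.1 * size + rng.normal(0, 1, n) # firm-value proxy
df = pd.DataFrame({"size": size, "roa": roa, "q": q})

# Path a: firm size -> profitability
m1 = sm.OLS(df["roa"], sm.add_constant(df[["size"]])).fit()
a, sa = m1.params["size"], m1.bse["size"]

# Path b: profitability -> firm value, controlling for firm size
m2 = sm.OLS(df["q"], sm.add_constant(df[["roa", "size"]])).fit()
b, sb = m2.params["roa"], m2.bse["roa"]

# Sobel test of the indirect effect a*b
z = (a * b) / np.sqrt(b**2 * sa**2 + a**2 * sb**2)
p = 2 * (1 - stats.norm.cdf(abs(z)))
print(f"indirect effect = {a*b:.3f}, Sobel z = {z:.2f}, p = {p:.4f}")
```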

An Empirical Study on the Influencing Factors for Big Data Intended Adoption: Focusing on the Strategic Value Recognition and TOE (Technology Organizational Environment) Framework (빅데이터 도입의도에 미치는 영향요인에 관한 연구: 전략적 가치인식과 TOE(Technology Organizational Environment) Framework을 중심으로)

  • Ka, Hoi-Kwang; Kim, Jin-soo
    • Asia Pacific Journal of Information Systems / v.24 no.4 / pp.443-472 / 2014
  • To survive in the global competitive environment, enterprises must be able to solve a wide range of problems and find optimal solutions effectively. Big data is perceived as a tool for solving enterprise problems and improving competitiveness through its diverse problem-solving and advanced predictive capabilities. Owing to this remarkable potential, the implementation of big data systems has increased in many enterprises around the world. Big data is now called the "crude oil" of the 21st century and is expected to provide competitive superiority. Big data is in the limelight because, whereas conventional IT has reached the limits of its possibilities, big data goes beyond those limits and can be used to create new value, such as business optimization and new business creation, through data analysis. However, because big data has often been introduced hastily, without considering how its strategic value is to be derived and realized, firms experience difficulties in deriving strategic value and utilizing their data. According to a survey of 1,800 IT professionals from 18 countries, only 28% of corporations were utilizing big data well, and many respondents reported difficulties in deriving strategic value and operating big data. To introduce big data, the strategic value should be identified and environmental aspects, such as internal and external regulations and systems, should be considered; in practice these factors have not been well reflected. The cause of failure turned out to be that big data was introduced in response to IT trends and the surrounding environment, hastily and before the conditions for introduction had been arranged. Successful introduction requires a clear understanding of the strategic value obtainable through big data and a systematic analysis of the environment in which it will be applied, but because corporations consider only partial achievements and technological aspects, successful introduction has not been achieved. Previous work shows that most big data research focuses on concepts, cases, and practical suggestions without empirical study. The purpose of this study is to provide a theoretically and practically useful implementation framework and strategies for big data systems by conducting a comprehensive literature review, identifying the factors that influence successful big data system implementation, and analyzing empirical models. To this end, the factors that can affect the intention to adopt big data were derived by reviewing information system success factors, strategic value perception factors, factors of the information system introduction environment, and the big data literature, and a structured questionnaire was developed. The questionnaire survey and statistical analysis were then conducted with the people in charge of big data inside corporations as respondents.
According to the statistical analysis, the strategic value perception factors and the intra-industry environmental factors positively affected the intention to adopt big data. The theoretical, practical, and policy implications derived from the results are as follows. The first theoretical implication is that this study proposes the factors that affect the intention to adopt big data by reviewing strategic value perception, environmental factors, and previous big data studies, and presents variables and measurement items that were empirically analyzed and verified. The study is meaningful in that it measured the influence of each variable on adoption intention by verifying the relationships between the independent and dependent variables through a structural equation model. Second, the study defined the independent variables (strategic value perception, environment), the dependent variable (adoption intention), and the moderating variables (type of business and corporate size), and laid a theoretical foundation for subsequent empirical research by developing measurement items with demonstrated reliability and validity. Third, by verifying the significance of the strategic value perception and environmental factors proposed in previous studies, this study will aid later empirical research on the factors affecting big data adoption. The practical implications are as follows. First, the study establishes an empirical basis for the big data field by investigating the causal relationships between the strategic value perception and environmental factors and adoption intention, and by proposing measurement items with demonstrated reliability and validity. Second, the study shows that strategic value perception positively affects the intention to adopt big data, underscoring the importance of strategic value perception. Third, the study proposes that a corporation introducing big data should do so only after a precise analysis of its industry environment. Fourth, the study shows that the factors affecting big data adoption differ depending on the size and type of business of the corporation, so these characteristics should be considered when introducing big data. The policy implications are as follows. First, more varied utilization of big data is needed. The strategic value of big data can be approached in various ways, in products and services, productivity, and decision making, and can be utilized across all business areas, but major domestic corporations limit their consideration to parts of the product and service areas. Accordingly, when introducing big data, the utilization stage should be reviewed in detail and the system designed to maximize utilization. Second, the study identifies the burden of system introduction costs, difficulties in using the systems, and a lack of trust in supplier corporations as obstacles corporations face at the introduction stage.
Because global IT corporations dominate the big data market, domestic corporations cannot help but depend on foreign vendors when introducing big data. Considering that Korea, despite being a powerful IT country, has no global IT corporations, big data can be seen as a chance to foster world-class companies, and the government therefore needs to nurture leading corporations through active policy support. Third, there is a shortage of professional manpower, inside and outside corporations, for introducing and operating big data. In big data, extracting valuable insight from the data matters more than building the system itself; this requires talent with academic knowledge and experience in fields such as IT, statistics, strategy, and management, and such talent should be developed through systematic education. By identifying and verifying the main variables that affect the intention to adopt big data, this study establishes a theoretical foundation for empirical research in the big data field and is expected to provide useful guidelines for corporations and policymakers who are considering big data implementation.
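As a simplified stand-in for the paper's structural equation model, the sketch below regresses adoption intention on strategic value perception and environmental pressure, with firm size as a moderator, on simulated survey responses. The construct names, scales, and effect sizes are assumptions, and ordinary least squares is used here in place of a full SEM.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey responses (roughly 7-point scales); the constructs and
# effects are illustrative stand-ins for those in the paper.
rng = np.random.default_rng(7)
n = 300
d = pd.DataFrame({
    "strategic_value": rng.normal(5, 1, n),   # perceived strategic value of big data
    "environment":     rng.normal(4, 1, n),   # intra-industry environmental pressure
    "firm_size":       rng.integers(0, 2, n), # 0 = SME, 1 = large firm (moderator)
})
d["intention"] = (0.6 * d["strategic_value"] + 0.3 * d["environment"]
                  + 0.2 * d["firm_size"] * d["strategic_value"]
                  + rng.normal(0, 1, n))

# Regression stand-in for the structural model, with firm size moderating the
# effect of strategic value perception on adoption intention.
model = smf.ols("intention ~ strategic_value * firm_size + environment", data=d).fit()
print(model.summary().tables[1])
```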

The influence of consumption values on fast fashion brand purchases (소비가치가 패스트 패션 브랜드 구매에 미치는 영향)

  • Park, Hye-Jung
    • The Research Journal of the Costume Culture / v.23 no.3 / pp.468-483 / 2015
  • Fast fashion brand marketers should develop marketing strategies that effectively satisfy the values consumers seek when purchasing fast fashion brands. This study aimed to identify the consumption value factors of fast fashion brands and to reveal the value factors that influence attitudes toward purchasing fast fashion brands. Data were gathered by surveying university students in the Seoul metropolitan area using convenience sampling. Three hundred and five questionnaires were used in the statistical analysis, which consisted of exploratory factor analysis using SPSS and confirmatory factor analysis and path analysis using AMOS. The factor analysis revealed the following six value factors: emotional value, social value, price/value for money, durability value, eco-value, and consistency value. The fit statistics for the six-factor model were quite acceptable. Two of the six value factors, emotional value and price/value for money, positively influenced attitudes toward purchasing fast fashion brands. The overall fit of the resulting model suggested that it fit the data well. The results suggest that fast fashion marketers need to understand the value factors that motivate consumers to purchase fast fashion brands. In addition, marketers should focus their efforts on satisfying emotional value and price/value for money in order to establish their brands in the increasingly competitive fast fashion industry.
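The measurement step described in the abstract, an exploratory factor analysis of value items, can be sketched as below on simulated Likert responses. Only two of the six factors are simulated, and the item names, sample, and rotation settings are illustrative assumptions rather than the study's questionnaire (scikit-learn 0.24 or later is assumed for the varimax rotation).

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

# Hypothetical Likert-scale responses to 12 value items; two latent factors
# (emotional value, price/value for money) are simulated purely for
# illustration; this is not the study's questionnaire or data.
rng = np.random.default_rng(3)
n = 305
emotional = rng.normal(0, 1, n)
price = rng.normal(0, 1, n)
items = {}
for k in range(6):
    items[f"emo_{k+1}"] = 4 + emotional + rng.normal(0, 0.6, n)
    items[f"price_{k+1}"] = 4 + price + rng.normal(0, 0.6, n)
X = pd.DataFrame(items)

# Exploratory factor analysis with varimax rotation.
fa = FactorAnalysis(n_components=2, rotation="varimax", random_state=0)
fa.fit(X)
loadings = pd.DataFrame(fa.components_.T, index=X.columns,
                        columns=["factor_1", "factor_2"])
print(loadings.round(2))
```

Items loading cleanly on one factor each would support the kind of six-factor measurement model the study then tests with confirmatory factor analysis and path analysis.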