• Title/Summary/Keyword: Measure of prediction success

Search Results: 11

A Split Criterion for Binary Decision Trees

  • Choi, Hyun Jip; Oh, Myong Rok
    • Communications for Statistical Applications and Methods, v.9 no.2, pp.411-423, 2002
  • In this paper, we propose a split criterion for binary decision trees. The proposed criterion selects the optimal split by measuring the prediction success of the candidate splits at a given node. The criterion is shown to have the property of exclusive preference. Examples are given to demonstrate the properties of the criterion.

Development of the Drop-outs Prediction Model for Intelligent Drop-outs Prevention System

  • Song, Mi-Young
    • Journal of the Korea Society of Computer and Information, v.22 no.10, pp.9-17, 2017
  • Student dropout prediction is indispensable for intelligent systems that assess the educational performance and success rate of a university. In this paper, we therefore propose an intelligent dropout prediction system that minimizes dropout through a proactive process built on an effective model predicting which students are at risk of dropping out. The main data sets for dropout prediction were questionnaires and university records. The questionnaire was constructed on theoretical and empirical grounds concerning factors affecting student performance and causes of dropout; the university records included grades, interviews, and attendance during university life. From these data sets, the proposed model classifies students into a risk group and a normal group using statistical methods and the Naive Bayes algorithm, and the intelligent dropout prediction system was constructed by applying this model. We expect the proposed approach to be used effectively to reduce student dropout in universities.
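The abstract does not disclose its features or data, but the risk/normal classification it describes can be sketched with a small Gaussian Naive Bayes classifier. The features (grade, attendance rate, survey score) and all training values below are purely illustrative assumptions, not the paper's data.

```python
import math

# Hypothetical training data: (grade, attendance_rate, survey_score) -> label.
# Feature names and values are invented for illustration.
train = [
    ((3.5, 0.95, 4.2), "normal"), ((3.8, 0.98, 4.5), "normal"),
    ((3.2, 0.90, 3.9), "normal"), ((1.8, 0.55, 2.1), "risk"),
    ((2.1, 0.60, 2.4), "risk"),   ((1.5, 0.45, 1.8), "risk"),
]

def fit(data):
    """Estimate per-class feature means, variances, and class priors."""
    model = {}
    for label in {lbl for _, lbl in data}:
        rows = [x for x, lbl in data if lbl == label]
        n = len(rows)
        means = [sum(col) / n for col in zip(*rows)]
        varis = [sum((v - m) ** 2 for v in col) / n + 1e-6  # smoothed variance
                 for col, m in zip(zip(*rows), means)]
        model[label] = (means, varis, n / len(data))
    return model

def predict(model, x):
    """Pick the class with the highest Gaussian log-posterior."""
    def log_post(label):
        means, varis, prior = model[label]
        ll = math.log(prior)
        for v, m, s2 in zip(x, means, varis):
            ll += -0.5 * math.log(2 * math.pi * s2) - (v - m) ** 2 / (2 * s2)
        return ll
    return max(model, key=log_post)

model = fit(train)
print(predict(model, (1.9, 0.50, 2.0)))  # low-performing profile -> "risk"
```

A real system would of course be trained on the questionnaire and university-record features the paper describes rather than these toy tuples.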

Prediction of a hit drama with a pattern analysis on early viewing ratings (초기 시청시간 패턴 분석을 통한 대흥행 드라마 예측)

  • Nam, Kihwan; Seong, Nohyoon
    • Journal of Intelligence and Information Systems, v.24 no.4, pp.33-49, 2018
  • The impact of a TV drama's success on ratings and channel promotion is very high, and its cultural and business impact has also been demonstrated through the Korean Wave. Early prediction of a blockbuster TV drama is therefore strategically important for the media industry. Previous studies have tried to predict audience ratings and drama success in various ways, but most have made simple predictions based on intuitive factors such as the lead actor and the time slot, which limits their predictive power. In this study, we propose a model for predicting the popularity of a drama by analyzing customers' viewing patterns on the basis of various theories. This is not only a theoretical contribution but also a practical one, since the model can be used by actual broadcasting companies. We collected data on 280 TV mini-series dramas broadcast on terrestrial channels over the 10 years from 2003 to 2012. From these, we selected the 45 most highly ranked and least highly ranked dramas and analyzed their viewing patterns in 11 steps. The assumptions and conditions for modeling are based on existing studies, on the opinions of actual broadcasters, and on data mining techniques. We then developed a prediction model by measuring the viewing-time distance (difference) with Euclidean and correlation methods, a quantity we term similarity (the sum of distances). Through this similarity measure, we predicted the success of a drama from the distribution of viewers' initial viewing-time patterns over episodes 1-5. To confirm that the model is not sensitive to the choice of measure, various distance measures were applied and the model was checked for robustness. Once the model was established, a grid search allowed us to build a more predictive model.
    Furthermore, when a new drama was broadcast, we classified viewers who had watched more than 70% of the total airtime as "passionate viewers" and compared the percentage of passionate viewers between the most highly ranked and the least highly ranked dramas, so that we could determine the likelihood of a blockbuster mini-series. We find that the initial viewing-time pattern is the key factor in predicting blockbuster dramas: our model classified blockbuster dramas correctly with 75.47% accuracy from the initial viewing-time pattern analysis. This paper thus shows a high prediction rate while suggesting an audience-rating method different from existing ones. Broadcasters currently rely heavily on a few famous actors (the so-called star system) and face more severe competition than ever because of rising production costs, a long-term recession, aggressive investment by comprehensive programming channels, and large corporations; everyone is in a financially difficult situation. The basic revenue model of these broadcasters is advertising, and advertising is executed on the basis of audience ratings. The drama market carries demand uncertainty that makes forecasting difficult, even though dramas contribute heavily to the financial success of a broadcaster's content. Therefore, to minimize the risk of failure, analyzing the distribution of initial viewing time can be of practical help in establishing a response strategy (programming, marketing, story changes, etc.) for the companies involved. We also found that audience behavior is crucial to a program's success, and we define viewer loyalty as a measure of how enthusiastically a program is watched.
    By calculating the loyalty of these passionate viewers, we can successfully predict the success of a program. This way of calculating loyalty can also be applied to other platforms, and it can support marketing programs such as highlights, script previews, making-of films, characters, games, and other marketing projects.
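The distance-based similarity idea described above can be sketched as follows. The 11-step viewing-time distributions and prototype values below are invented for illustration; they are not the paper's data, and the paper's actual similarity is a sum of distances over many ranked dramas rather than two prototypes.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two viewing-time patterns."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def correlation(a, b):
    """Pearson correlation between two viewing-time patterns."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# Hypothetical 11-step viewing-time distributions (shares per step),
# aggregated over episodes 1-5; values are illustrative only.
hit_prototype  = [0.02, 0.03, 0.04, 0.05, 0.06, 0.08, 0.10, 0.12, 0.14, 0.17, 0.19]
flop_prototype = [0.19, 0.17, 0.14, 0.12, 0.10, 0.08, 0.06, 0.05, 0.04, 0.03, 0.02]
new_drama      = [0.03, 0.03, 0.05, 0.05, 0.07, 0.08, 0.10, 0.11, 0.14, 0.16, 0.18]

def classify(pattern):
    """Label a new drama by its nearer prototype (smaller distance = more similar)."""
    d_hit = euclidean(pattern, hit_prototype)
    d_flop = euclidean(pattern, flop_prototype)
    return "hit" if d_hit < d_flop else "flop"

print(classify(new_drama))  # the rising pattern is nearer the hit prototype
print(round(correlation(new_drama, hit_prototype), 3))
```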

Prediction of Student's Interest on Sports for Classification using Bi-Directional Long Short Term Memory Model

  • Ahamed, A. Basheer; Surputheen, M. Mohamed
    • International Journal of Computer Science & Network Security, v.22 no.10, pp.246-256, 2022
  • Recently, parents and teachers have treated physical education as a minor subject for students in elementary and secondary schools, yet physical education performance has become increasingly significant as parents and schools pay more attention to physical schooling. The sports mining with distribution analysis model considers different factors, including the games, comments, conversations, and connections made around numerous sports interests. Using machine learning and deep learning approaches, children's athletic and academic interests can be tracked over the course of their academic lives. A number of studies have focused on predicting the success of students in higher education; sports interest prediction research at the secondary level is uncommon, although the secondary level is often used as a benchmark for students' educational development at higher levels. This article presents an automated student interest prediction on sports mining using a deep-learning-based bi-directional long short-term memory (BiLSTM) model. Pre-processing of the data, interest classification, and parameter tuning are the essential operations of the proposed model. Initially, data augmentation is used to expand the dataset's size. A BiLSTM model then predicts and classifies user interests, with the Adagrad optimizer employed for hyperparameter optimization. The model's performance is evaluated on a dataset using precision, recall, accuracy, and F-measure. The proposed model achieved 95% accuracy at 400 instances, where existing techniques achieved 93.20% for the same, and achieved 95% accuracy and precision for the 60%-40% data split, where existing models achieved 93%.
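A full BiLSTM uses gated cells with learned weights; the minimal sketch below only illustrates the bidirectional recurrence and the per-step concatenation of forward and backward states that a BiLSTM layer performs before its classification head. The simple tanh cell, fixed toy weights, and activity sequence are assumptions for illustration, not the paper's model.

```python
import math

def rnn_pass(seq, w_in=0.5, w_rec=0.3):
    """Minimal Elman-style recurrence: h_t = tanh(w_in*x_t + w_rec*h_{t-1}).
    A real (Bi)LSTM adds input/forget/output gating; this only shows the recurrence."""
    h = 0.0
    states = []
    for x in seq:
        h = math.tanh(w_in * x + w_rec * h)
        states.append(h)
    return states

def bidirectional_features(seq):
    """Pair the forward-pass state with the backward-pass state at each
    time step, as a bidirectional layer does before classification."""
    fwd = rnn_pass(seq)
    bwd = list(reversed(rnn_pass(list(reversed(seq)))))
    return list(zip(fwd, bwd))

# Hypothetical per-period activity sequence for one student (scaled counts).
seq = [0.2, 0.9, 0.4, 0.7]
feats = bidirectional_features(seq)
for t, (f, b) in enumerate(feats):
    print(f"t={t}: forward={f:.3f}, backward={b:.3f}")
```

The point of the backward pass is that the feature at each step also reflects what comes later in the sequence, which is why BiLSTMs suit whole-sequence classification tasks like interest prediction.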

Machine Learning-based MCS Prediction Models for Link Adaptation in Underwater Networks (수중 네트워크의 링크 적응을 위한 기계 학습 기반 MCS 예측 모델 적용 방안)

  • Byun, JungHun; Jo, Ohyun
    • Journal of Convergence for Information Technology, v.10 no.5, pp.1-7, 2020
  • This paper proposes a link adaptation method for the underwater Internet of Things (IoT) that reduces the power consumption of sensor nodes and improves network throughput in underwater IoT networks. Adaptive Modulation and Coding (AMC) is one such link adaptation method. AMC exploits the strong correlation between Signal-to-Noise Ratio (SNR) and Bit Error Rate (BER), but it is difficult to apply as-is in underwater IoT. We therefore propose a machine-learning-based AMC technique for underwater environments. The proposed Modulation and Coding Scheme (MCS) prediction model predicts the transmission method that achieves a target BER value in the underwater channel environment. Because it is difficult to apply the predicted transmission method in real underwater communication, this paper uses a high-accuracy BER prediction model to measure the performance of the MCS prediction model. Consequently, the proposed AMC technique confirmed the applicability of machine learning by increasing the probability of communication success.
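The paper's BER and MCS models are learned from underwater channel data; the sketch below only illustrates the generic AMC selection logic of choosing the fastest MCS whose predicted BER meets the target. The MCS table, rates, thresholds, and the stand-in BER predictor are all invented for illustration.

```python
# Hypothetical MCS table; indices, rates and SNR thresholds are illustrative.
MCS_TABLE = [
    {"mcs": 0, "rate_bps": 100, "min_snr_db": 2.0},
    {"mcs": 1, "rate_bps": 200, "min_snr_db": 6.0},
    {"mcs": 2, "rate_bps": 400, "min_snr_db": 10.0},
    {"mcs": 3, "rate_bps": 800, "min_snr_db": 14.0},
]

def predict_ber(snr_db, min_snr_db):
    """Toy stand-in for the learned BER model: BER decays as the SNR
    margin over the MCS requirement grows (capped at 0.5)."""
    margin = snr_db - min_snr_db
    return min(0.5, 10 ** (-1 - 0.5 * margin))

def select_mcs(snr_db, target_ber=1e-3):
    """Pick the highest-rate MCS whose predicted BER meets the target,
    falling back to the most robust MCS when none qualifies."""
    best = MCS_TABLE[0]
    for entry in MCS_TABLE:
        if predict_ber(snr_db, entry["min_snr_db"]) <= target_ber:
            best = max(best, entry, key=lambda e: e["rate_bps"])
    return best["mcs"]

print(select_mcs(snr_db=12.0))
```

In the paper's setting the `predict_ber` stand-in would be replaced by the machine-learned model trained on underwater channel measurements.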

Development and Verification of the Fog Stability Index for Incheon International Airport based on the Measured Fog Characteristics (인천국제공항의 안개 특성에 따른 안개 안정 지수 FSI(Fog Stability Index) 개발 및 검증)

  • Song, Yunyoung; Yum, Seong Soo
    • Atmosphere, v.23 no.4, pp.443-452, 2013
  • The original Fog Stability Index (FSI) is formulated as FSI=$2(T-T_d)+2(T-T_{850})+WS_{850}$, where $T-T_d$ is the dew point deficit (temperature minus dew point temperature), $T-T_{850}$ is an atmospheric stability measure (temperature minus temperature at the 850 hPa level), and $WS_{850}$ is the wind speed at the 850 hPa level. As a way to improve fog prediction at Incheon International Airport (IIA), we develop a modified FSI for IIA using two years of meteorological data at IIA, from June 2011 to May 2013, with the first year for development and the second for validation. The relative contribution of the three parameters of the modified FSI is 9:1:0, indicating that $WS_{850}$ is a non-contributing factor for fog formation at IIA. The critical success index (CSI) of the modified FSI is 0.68. Further development considers the fact that fogs at IIA are strongly influenced by advection of moisture from the Yellow Sea: the parameter added after statistical evaluation of several candidates is the dew point deficit at a buoy over the Yellow Sea. The relative contribution of the four parameters (including the new one) of the newly developed FSI is 10:2:0.5:6.4, and the CSI of the new FSI is 0.50. Since the development period of one year is too short, the FSI should be refined further as more data are accumulated.
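The original FSI formula quoted in the abstract can be computed directly. The input values below are illustrative, not observations from the paper.

```python
def fog_stability_index(t, t_dew, t_850, ws_850):
    """Original FSI from the abstract:
    FSI = 2*(T - Td) + 2*(T - T850) + WS850,
    with temperatures in degrees C and wind speed at 850 hPa.
    Lower values indicate conditions more favorable to fog."""
    return 2 * (t - t_dew) + 2 * (t - t_850) + ws_850

# Illustrative surface/850 hPa values, not data from the paper.
print(fog_stability_index(t=15.0, t_dew=14.0, t_850=13.0, ws_850=3.0))  # 9.0
```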

A PLS Path Modeling Approach on the Cause-and-Effect Relationships among BSC Critical Success Factors for IT Organizations (PLS 경로모형을 이용한 IT 조직의 BSC 성공요인간의 인과관계 분석)

  • Lee, Jung-Hoon; Shin, Taek-Soo; Lim, Jong-Ho
    • Asia Pacific Journal of Information Systems, v.17 no.4, pp.207-228, 2007
  • For a long time, measurement of Information Technology (IT) organizations' activities was limited mainly to financial indicators. However, as the functions of information systems have multiplied, a number of studies have explored new measurement methodologies that combine financial measures with new kinds of measures. In particular, recent years have seen research on the IT Balanced Scorecard (BSC), a concept derived from the BSC for measuring IT activities. The BSC provides more than the integration of non-financial measures into a performance measurement system: its core rests on the cause-and-effect relationships between measures, which allow prediction of value-chain performance, communication and realization of corporate strategy, and incentive-controlled actions. More recently, BSC proponents have focused on the need to tie measures together into a causal chain of performance and to test the validity of these hypothesized effects to guide the development of strategy. Kaplan and Norton [2001] argue that one of the primary benefits of the balanced scorecard is its use in gauging the success of strategy, and Norreklit [2000] insists that the cause-and-effect chain is central to the balanced scorecard; the cause-and-effect chain is equally central to the IT BSC. However, the relationship between information systems and enterprise strategies, as well as the connections among various IT performance measurement indicators, have received little study. Ittner et al. [2003] report that 77% of all surveyed companies with an implemented BSC place little or no emphasis on soundly modeled cause-and-effect relationships, despite the importance of cause-and-effect chains as an integral part of the BSC. This shortcoming can be explained by one theoretical and one practical reason [Blumenberg and Hinz, 2006].
    From a theoretical point of view, causalities within the BSC method and their application are only vaguely described by Kaplan and Norton. From a practical point of view, modeling corporate causalities is a complex task owing to tedious data acquisition and subsequent reliability maintenance. Nevertheless, cause-and-effect relationships are an essential part of BSCs because they differentiate performance measurement systems like the BSC from simple key performance indicator (KPI) lists. KPI lists present an ad-hoc collection of measures to managers but do not allow a comprehensive view of corporate performance; a performance measurement system like the BSC instead models the relationships of the underlying value chain as cause-and-effect relationships. To overcome the deficiencies of causal modeling in the IT BSC, sound and robust causal modeling approaches are therefore required in theory as well as in practice. The purpose of this study is to suggest critical success factors (CSFs) and KPIs for measuring the performance of IT organizations and to empirically validate the causal relationships between those CSFs. For this purpose, we define four BSC perspectives for IT organizations following Van Grembergen [2000]: the Future Orientation perspective represents the human and technology resources needed by IT to deliver its services; the Operational Excellence perspective represents the IT processes employed to develop and deliver the applications; the User Orientation perspective represents the user evaluation of IT; and the Business Contribution perspective captures the business value of the IT investments. Each of these perspectives has to be translated into corresponding metrics and measures that assess the current situation. Based on previous IT BSC studies and COBIT 4.1, this study suggests 12 CSFs for the IT BSC, comprising 51 KPIs.
    We define the cause-and-effect relationships among BSC CSFs for IT organizations as follows: the Future Orientation perspective has positive effects on the Operational Excellence perspective, which in turn has positive effects on the User Orientation perspective, which finally has positive effects on the Business Contribution perspective. This research tests the validity of these hypothesized causal effects and the sub-hypothesized causal relationships. To do so, we used the Partial Least Squares approach to Structural Equation Modeling (PLS path modeling) to analyze the multiple IT BSC CSFs. PLS path modeling is more appropriate than techniques such as multiple regression and LISREL when analyzing small sample sizes, and its use has been gaining interest among IS researchers because of its ability to model latent constructs under conditions of non-normality and with small to medium sample sizes (Chin et al., 2003). The empirical results of our study using PLS path modeling show that the hypothesized causal effects in the IT BSC are partially significant.

A Study of Applications of 3D Body Scanning Technology - Focused on Apparel Industry - (3차원 바디 스캐너를 활용한 가상착의에 관한 인식 조사 - 업체 실무자 및 소비자를 대상으로 -)

  • Paek, Kyung-Ja; Lee, Jeong-Ran; Kim, Mi-Sung
    • Korean Journal of Human Ecology, v.18 no.3, pp.719-727, 2009
  • The ultimate success of commercial applications of body scan data in the apparel industry will lie in substantial consumer applications such as automated custom fit, size prediction, virtual try-on, and personal shopper services (Loker, S. et al., 2004). In this study, we surveyed fifty consumers and forty-seven apparel industry workers about their recognition of and interest in 3D body scanning and virtual try-on. The results are as follows. 55% of the apparel industry workers recognized 3D body scanning as a convenient technology but did not know how to use it. Regarding virtual try-on, 53% of the workers gave positive answers, and the consumers had a more positive view of virtual try-on than the workers did. The workers predicted that applying 3D body scan technology in the apparel industry could offer customers helpful information for clothing selection through virtual images of various sizes and styles, and could expand MTM (made-to-measure) production. Among male consumers in their twenties, 88% found virtual try-on useful for offline shopping and 100% for online shopping. 53% of the workers and 68% of the consumers answered that they could judge the quality of apparel products and purchase them by virtual try-on alone, making 3D virtual try-on a clearly effective tool for online shoppers. 85% of the workers anticipated applications of 3D body scanning not only in virtual try-on but also in body measurement and custom pattern development in the near future. Given the positive reactions and growing interest in virtual try-on, current conditions encourage more active research and wider use of the technology in the apparel industry.

Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok; Kim, Sunwoong; Choi, Heung Sik
    • Journal of Intelligence and Information Systems, v.25 no.1, pp.63-83, 2019
  • Investors prefer to look for trading points based on the shapes shown in charts rather than on complex analyses such as corporate intrinsic-value analysis or technical auxiliary indices. However, pattern analysis is difficult and has been computerized less than users need. In recent years, there have been many studies of stock price patterns using various machine learning techniques, including neural networks, in the field of artificial intelligence (AI). In particular, advances in IT have made it easier to analyze huge volumes of chart data to find patterns that can predict stock prices. Although short-term price forecasting power has improved, long-term forecasting power remains limited, so such patterns are used in short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that past technology could not recognize, but this can be vulnerable in practice because whether the patterns found are suitable for trading is a separate matter. Such studies find a point that matches a meaningful pattern and then measure performance after n days, assuming a purchase at that point; since this approach calculates virtual revenues, there can be many disparities with reality. Whereas existing research tries to find patterns with stock-price predictive power, this study proposes to define the patterns first and to trade when a pattern with a high success probability appears. The M&W wave patterns published by Merrill (1980) are simple because they can be distinguished by five turning points. Despite reports that some patterns have price predictability, there were no performance reports from use in the actual market. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of increasing pattern recognition accuracy.
    In this study, the 16 up-conversion patterns and 16 down-conversion patterns are reclassified into ten groups so that they can be easily implemented in the system, and only the one pattern with the highest success rate per group is selected for trading. Patterns that had a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. Our measurement reflects a real situation because it assumes that both the buy and the sell have been executed. We tested three ways to calculate the turning points. The first, the minimum-change-rate zig-zag method, removes price movements below a certain percentage and calculates the vertices. In the second, the high-low-line zig-zag method, the high price that meets the n-day high-price line is taken as the peak, and the low price that meets the n-day low-price line is taken as the valley. In the third, the swing wave method, a central high price higher than the n high prices to its left and right is taken as the peak, and a central low price lower than the n low prices to its left and right is taken as the valley. The swing wave method was superior to the other methods in our tests; we interpret this to mean that trading after confirming the completion of a pattern is more effective than trading while the pattern is still unfinished. Because the number of cases in this simulation was too large to find high-success patterns by exhaustive search, genetic algorithms (GA) were the most suitable solution. We also ran the simulation using Walk-forward Analysis (WFA), which tests the training section and the application section separately, so that we could respond appropriately to market changes. In this study, we optimize the stock portfolio as a whole, because optimizing the variables for each individual stock risks over-optimization.
    We therefore set the number of constituent stocks to 20 to increase the effect of diversified investment while avoiding over-optimization. We tested the KOSPI market by dividing it into six categories. In the results, the small-cap portfolio was the most successful and the high-volatility portfolio was the second best. This shows that prices need some volatility for patterns to take shape, but the highest volatility is not necessarily best.
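The first turning-point method described in the abstract, the minimum-change-rate zig-zag, can be sketched as follows. The 5% threshold and the price series are illustrative assumptions; the paper does not publish its parameter values.

```python
def zigzag_turning_points(prices, min_change=0.05):
    """Minimum-change-rate zig-zag: confirm a vertex only when the price
    reverses by more than `min_change` (fractional) from the last extreme."""
    pivots = [0]      # indices of confirmed turning points
    last_ext = 0      # index of the running extreme of the current leg
    direction = 0     # +1 rising leg, -1 falling leg, 0 undecided
    for i in range(1, len(prices)):
        change = (prices[i] - prices[last_ext]) / prices[last_ext]
        if direction >= 0 and prices[i] > prices[last_ext]:
            last_ext = i                    # extend the rising leg
            direction = 1
        elif direction <= 0 and prices[i] < prices[last_ext]:
            last_ext = i                    # extend the falling leg
            direction = -1
        elif abs(change) >= min_change:     # reversal large enough: confirm vertex
            pivots.append(last_ext)
            direction = 1 if change > 0 else -1
            last_ext = i
    pivots.append(last_ext)                 # close the final leg
    return pivots

# Illustrative price series: rises to a peak, falls to a valley, recovers.
prices = [100, 103, 110, 104, 99, 101, 106, 108]
print(zigzag_turning_points(prices))  # indices of start, peak, valley, end
```

Five consecutive vertices produced this way form one candidate M or W pattern in the paper's scheme; small wiggles below the threshold (like the 99 to 101 move above) are filtered out rather than treated as reversals.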

A STUDY ON THE MEASUREMENT OF THE IMPLANT STABILITY USING RESONANCE FREQUENCY ANALYSIS (공진 주파수 분석법에 의한 임플랜트의 안정성 측정에 관한 연구)

  • Park, Cheol; Lim, Ju-Hwan; Cho, In-Ho; Lim, Heon-Song
    • The Journal of Korean Academy of Prosthodontics, v.41 no.2, pp.182-206, 2003
  • Statement of problem: Successful osseointegration of endosseous threaded implants depends on many factors, including the surface characteristics and gross geometry of the implants, the quality and quantity of bone where the implants are placed, and the magnitude and direction of stress in functional occlusion. Clinical quantitative measurement of primary stability at placement and in the functional state may therefore play a role in predicting possible clinical symptoms and in adapting implant geometry, type, and surface characteristics to each patient's conditions, ultimately increasing the success rate of implants. Purpose: Available non-invasive techniques for the clinical measurement of implant stability and osseointegration include percussion, radiography, the $Periotest^{(R)}$, the Dental Fine $Tester^{(R)}$, and so on. There is, however, relatively little research aimed at standardizing the quantitative measurement of implant stability and osseointegration, owing to the varied clinical applications performed by individual operators. Therefore, to develop a non-invasive experimental method for measuring implant stability quantitatively, a resonance frequency analyzer that measures the natural frequency of a specific substance was developed in this study. Material & method: To test the stability of the resonance frequency analyzer developed in this study, the following methods and materials were used. 1) In-vitro study: implants were placed both in epoxy resin, whose physical properties are similar to the bone stiffness of humans, and in fresh cow rib bone specimens, and their resonance frequency values were measured and analyzed. To test the reliability of the data gathered with the resonance frequency analyzer, a comparative analysis with data from the $Periotest^{(R)}$ was conducted.
    2) In-vivo study: implants were inserted into the tibiae of 10 New Zealand rabbits, and their resonance frequency values with connected abutments were measured immediately after insertion and every 4 weeks thereafter for 16 weeks. Results: Implants of the same length placed in Hot Melt showed repeatable resonance frequency values. As the length of the abutment increased, the resonance frequency value changed significantly (p<0.01). As the thickness of the transducer increased through 0.5, 1.0, and 2.0 mm, the resonance frequency value increased significantly (p<0.05). For implants placed in PL-2 and epoxy resin with different degrees of exposure, the resonance frequency value increased as the exposure of the implant and the length of the abutment decreased. In the comparison based on physical properties, the resonance frequency value increased significantly with transducer thickness (p<0.01), and increased significantly as the stiffness of the substrate increased and the effective length of the implant decreased (p<0.05). In the cow rib bone experiment, increasing the abutment length produced a significant difference between the results from the resonance frequency analyzer and the $Periotest^{(R)}$, while there was no significant difference between the resonance frequency value and the $Periotest^{(R)}$ value with respect to the direction of measurement (p<0.05). The in-vivo experiment produced repeatable resonance frequency patterns: as time elapsed, the resonance frequency value increased significantly, with the exception of the 4th and 8th weeks (p<0.05).
    Conclusion: The resonance frequency analyzer developed here is an attempt to standardize the quantitative measurement of implant stability and osseointegration and to complement the reliability of data from other non-invasive measuring devices. Further research is needed to improve the efficiency of its clinical application, and further investigation is warranted on the standardized quantitative analysis of implant stability.