• Title/Summary/Keyword: 기업성과시스템 (corporate performance system)


Understanding Management of Technology(MOT) in South Korea through an Analysis of Graduate MOT Programs' Curricula (한국의 기술경영전문대학원의 교과과정을 통해 본 한국적 기술경영학의 정체성)

  • Taehyun Jung;Gyu Hyun Kwon;Kwon Yeong-il;Hyunkyu Park;Kyootai Lee;Jeonghwan Jeon
    • Journal of Technology Innovation / v.31 no.3 / pp.39-73 / 2023
  • The field of Management of Technology (MOT) emerged in response to the need for research management within U.S. public research institutions during the 1960s. Since its inception, it has proliferated significantly, being practiced in more than 809 institutions globally and over 19 institutions in Korea, encompassing both research and educational endeavors. Particularly noteworthy is the substantial investment of government resources, primarily channeled through the Ministry of Industry since 2007, which has expeditiously established a comprehensive framework for cultivating graduate-level MOT expertise, marked by both quantitative and qualitative advancements. The educational curriculum of Korean MOT deviates from foreign counterparts through distinctive pathways, exemplified by its emphasis on industry practice-oriented educational programs, standardization and isomorphism across different schools, and its interconnectedness with proximate academic disciplines. This research systematically analyzes the curriculum of Korean MOT graduate schools, thereby ascertaining its intrinsic identity and distinct attributes. To this end, a comprehensive examination of eleven principal MOT textbooks (three in Korean and eight in English) is conducted to delineate the primary content of the curriculum across seven thematic domains. Moreover, the study deliberates on MOT's differentiation from neighboring academic disciplines and its definitional attributes. Subsequently, the analysis projects the seven thematic domains onto the curricula of nine Korean MOT graduate programs. The findings illuminate that within Korean graduate programs, a substantial proportion of the curriculum, amounting to 62.5%, is dedicated to the operational aspects of technology management within corporate contexts, technology management specific to particular industries and technologies, and collaborative endeavors between academia and industry in the form of projects and seminars. Evidently, the Korean approach to technology management education is notably geared towards cultivating adept practitioners capable of executing technology management functions at a mid-tier managerial level, aligned with the exigencies of regional industries. Grounded in this analysis of technology management curricula, the study extrapolates implications for the future trajectory of MOT education in Korea, taking the stages of industrial development into consideration. It underscores the necessity to augment the educational curricula pertaining to the conceptual foundations of technology and innovation, strategic perspectives on technology and innovation, and the socio-economic context of technology management.

A Study on the Strategies of Growth in Small & Medium Construction Firms (강원지방 중소건설업의 성장전략에 관한 연구)

  • Kim, Beom-Jin;Cho, Chang-Jin
    • Korean Business Review / v.19 no.1 / pp.53-80 / 2006
  • This research was carried out to build up growth strategies for Kangwon district's small & medium-sized construction firms. For this purpose, we investigated the present situation and status of existing regional small & medium construction firms by analyzing data. Based on the results of this study, the following growth strategies are suggested for gaining competitive advantage. Firstly, the role of the top manager is the most important factor, since most top managers of small & medium-sized construction firms are also the owners of the firms. Secondly, a specialization strategy should be established: above all, firms should concentrate their business capacities on their core business, and this growth strategy should be based on the selective escalation of functions in order to maintain an appropriate level of construction works. Thirdly, specialized skills and skilled workers must be secured for competitive advantage. For human resource development, firms should train workers to be multi-functional on the assumption that they can stay at the firm until they wish to retire. Finally, the government must also spare no effort to encourage small & medium-sized construction firms to build up their competitive power and cultivate their capacity for self-sustained growth through the reform of systems related to the small and medium construction industry.


Are you a Machine or Human?: The Effects of Human-likeness on Consumer Anthropomorphism Depending on Construal Level (Are you a Machine or Human?: 소셜 로봇의 인간 유사성과 소비자 해석수준이 의인화에 미치는 영향)

  • Lee, Junsik;Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.129-149 / 2021
  • Recently, interest in social robots that can socially interact with humans is increasing. Thanks to the development of ICT, social robots can more easily provide personalized services and emotional connection to individuals, and their role is drawing attention as a means to address modern social problems and the resulting decline in the quality of individual lives. Along with this interest, the spread of social robots is also increasing significantly. Many companies are introducing robot products to target various markets, but so far no clear trend is leading the market. Accordingly, there are more and more attempts to differentiate robots through design. In particular, anthropomorphism has been an important topic in social robot design, and many approaches have been attempted to anthropomorphize social robots to produce positive effects. However, there is a lack of research that systematically describes the mechanism by which anthropomorphism of social robots is formed. Most existing studies have focused on verifying the positive effects of the anthropomorphism of social robots on consumers. In addition, the formation of anthropomorphism may vary depending on an individual's motivation or temperament, but few studies have examined this. A vague understanding of anthropomorphism makes it difficult to derive optimal design points for shaping the anthropomorphism of social robots. The purpose of this study is to verify the mechanism by which the anthropomorphism of social robots is formed. This study confirmed the effect of the human-likeness of social robots (within-subjects) and the construal level of consumers (between-subjects) on the formation of anthropomorphism through a 3×2 mixed-design experiment. Research hypotheses on the mechanism by which anthropomorphism is formed were presented, and the hypotheses were tested by analyzing data from a sample of 206 people. The first hypothesis is that the higher the human-likeness of the robot, the higher the level of anthropomorphism toward the robot. Hypothesis 1 was supported by a one-way repeated-measures ANOVA and a post hoc test. The second hypothesis is that the effect of human-likeness on the level of anthropomorphism will differ depending on the construal level of consumers. Specifically, this study predicts that, first, the difference in the level of anthropomorphism as human-likeness increases will be greater under the high construal condition than under the low construal condition; second, if the robot has no human-likeness, there will be no difference in the level of anthropomorphism according to construal level; third, if the robot has low human-likeness, the low construal level condition will make the robot more anthropomorphic than the high construal level condition; and finally, if the robot has high human-likeness, the high construal level condition will make the robot more anthropomorphic than the low construal level condition. We performed a two-way repeated-measures ANOVA to test these hypotheses and confirmed that the interaction effect of human-likeness and construal level was significant. Further analysis to specifically examine the interaction effect also provided results in support of our hypotheses.
The analysis shows that the human-likeness of the robot increases the level of anthropomorphism of social robots, and that the effect of human-likeness on anthropomorphism varies depending on the construal level of consumers. This study has implications in that it explains the mechanism by which anthropomorphism is formed by considering both human-likeness, a design attribute of social robots, and construal level, an individual consumer's way of thinking. We expect the findings of this study to serve as a basis for design optimization for the formation of anthropomorphism in social robots.
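As a rough illustration of the 3×2 mixed-design analysis this abstract describes, the sketch below runs a mixed ANOVA (human-likeness within-subjects, construal level between-subjects) on synthetic data. The column names, factor levels, and effect sizes are our assumptions, not the paper's data, and the pingouin library stands in for whatever statistical package the authors used.

```python
import numpy as np
import pandas as pd
import pingouin as pg

# Synthetic long-format stand-in for the 206-person sample (values illustrative).
rng = np.random.default_rng(1)
rows = []
for pid in range(206):
    construal = "high" if pid < 103 else "low"        # between-subjects factor
    for j, hl in enumerate(["none", "low", "high"]):  # within-subjects factor
        slope = 1.0 if construal == "high" else 0.6   # assumed interaction
        rows.append((pid, construal, hl, 3 + j * slope + rng.normal(0, 1)))
df = pd.DataFrame(rows, columns=["id", "construal", "human_likeness", "anthro"])

# 3x2 mixed ANOVA: the interaction row tests Hypothesis 2.
print(pg.mixed_anova(data=df, dv="anthro", within="human_likeness",
                     subject="id", between="construal"))
```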

Analysis of Success Cases of InsurTech and Digital Insurance Platform Based on Artificial Intelligence Technologies: Focused on Ping An Insurance Group Ltd. in China (인공지능 기술 기반 인슈어테크와 디지털보험플랫폼 성공사례 분석: 중국 평안보험그룹을 중심으로)

  • Lee, JaeWon;Oh, SangJin
    • Journal of Intelligence and Information Systems / v.26 no.3 / pp.71-90 / 2020
  • Recently, the global insurance industry has been rapidly pursuing digital transformation through the use of artificial intelligence technologies such as machine learning, natural language processing, and deep learning. As a result, more and more foreign insurers have succeeded with artificial intelligence technology-based InsurTech and platform businesses. Ping An Insurance Group Ltd., China's largest private company, is leading China's fourth industrial revolution with remarkable achievements in InsurTech and digital platforms as a result of constant innovation, with 'finance and technology' and 'finance and ecosystem' as its corporate keywords. Against this backdrop, this study analyzed the InsurTech and platform business activities of Ping An Insurance Group Ltd. through the ser-M analysis model to provide strategic implications for revitalizing the AI technology-based businesses of domestic insurers. The ser-M model provides a frame for interpreting, in an integrated manner, the vision and leadership of the CEO (subject), the historical environment of the enterprise (environment), the utilization of various resources (resource), and their unique mechanism relationships (mechanism). The case analysis shows that Ping An Insurance Group Ltd. has achieved cost reduction and customer service development by digitally innovating its entire business area, including sales, underwriting, claims, and loan services, by utilizing core artificial intelligence technologies such as face, voice, and facial expression recognition. In addition, online data in China and the vast offline data and insights accumulated by the company were combined with new technologies such as artificial intelligence and big data analysis to build a digital platform that integrates financial services and digital service businesses. Ping An Insurance Group Ltd. pursued constant innovation, and as of 2019, sales reached $155 billion, ranking seventh among all companies in the Global 2000 rankings selected by Forbes Magazine. Analyzing the background of this success from the ser-M perspective, founder Ma Mingzhe quickly grasped the development of digital technology, market competition, and changes in population structure in the era of the fourth industrial revolution, established a new vision, and displayed agile, digital technology-focused leadership. Based on the founder's strong leadership in response to environmental changes, the company has successfully led InsurTech and platform businesses through innovation of internal resources such as investment in artificial intelligence technology, securing excellent professionals, and strengthening big data capabilities, combined with external absorption capabilities and strategic alliances across various industries. This success story of Ping An Insurance Group Ltd. offers the following implications for domestic insurance companies preparing for digital transformation. First, CEOs of domestic companies also need to recognize the paradigm shift in the industry due to changing digital technology and quickly arm themselves with digital technology-oriented leadership to spearhead the digital transformation of their enterprises.
Second, the Korean government should urgently overhaul related laws and systems to further promote the use of data across industries and provide bold support such as deregulation, tax benefits, and platform provision to help the domestic insurance industry secure global competitiveness. Third, Korean companies also need to invest more boldly in the development of artificial intelligence technology so that the systematic securing of internal and external data, the training of technical personnel, and patent applications can be expanded, and they should quickly establish digital platforms so that diverse customer experiences can be integrated through trained artificial intelligence technology. Finally, since there may be limits to generalizing from a single case of an overseas insurance company, we hope that future research will examine various management strategies related to artificial intelligence technology more extensively, by analyzing cases across multiple industries or companies or through empirical research.

A Study on the Impact of Artificial Intelligence on Decision Making : Focusing on Human-AI Collaboration and Decision-Maker's Personality Trait (인공지능이 의사결정에 미치는 영향에 관한 연구 : 인간과 인공지능의 협업 및 의사결정자의 성격 특성을 중심으로)

  • Lee, JeongSeon;Suh, Bomil;Kwon, YoungOk
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.231-252 / 2021
  • Artificial intelligence (AI) is the key technology that will change the future the most. It affects the industry as a whole and daily life in various ways. As data availability increases, artificial intelligence finds optimal solutions and infers/predicts through self-learning. Research and investment related to automation that discovers and solves problems on its own are ongoing. Automation by artificial intelligence has benefits such as cost reduction and the minimization of human intervention and of differences in human capability. However, there are side effects, such as limits to the artificial intelligence's autonomy and erroneous results due to algorithmic bias; in the labor market, it raises the fear of job replacement. Prior studies on the utilization of artificial intelligence have shown that individuals do not necessarily use the information (or advice) it provides. People are more sensitive to algorithmic error than to human error, so they avoid algorithms after seeing them err, a phenomenon called "algorithm aversion." Recently, artificial intelligence has begun to be understood from the perspective of augmenting human intelligence, and interest has shifted toward human-AI collaboration rather than AI alone. A study of 1,500 companies in various industries found that human-AI collaboration outperformed AI alone. In medicine, pathologist-deep learning collaboration reduced the pathologists' cancer diagnosis error rate by 85%. Leading AI companies, such as IBM and Microsoft, are starting to frame AI as augmented intelligence. Human-AI collaboration is emphasized in the decision-making process because artificial intelligence is superior in information-based analysis while intuition is a uniquely human capability, so human-AI collaboration can make optimal decisions. In an environment where change is getting faster and uncertainty increases, the need for artificial intelligence in decision-making will grow, and active discussion is expected on approaches that utilize artificial intelligence for rational decision-making. This study investigates the impact of artificial intelligence on decision-making, focusing on human-AI collaboration and the interaction between the decision-maker's personal traits and the advisor type. The advisors were classified into three types: human, artificial intelligence, and human-AI collaboration. We investigated the perceived usefulness of advice, the utilization of advice in decision-making, and whether the decision-maker's personal traits are influencing factors. Three hundred and eleven adult male and female participants performed a task predicting the age of faces in photos, and the results showed that the advisor type does not directly affect the utilization of advice; decision-makers utilize advice only when they believe it can improve prediction performance. In the case of human-AI collaboration, decision-makers rated the perceived usefulness of advice higher regardless of their personal traits, and the advice was more actively utilized. If the advisor was artificial intelligence alone, decision-makers who scored high in conscientiousness, high in extroversion, or low in neuroticism rated the perceived usefulness of the advice highly and utilized it actively. This study has academic significance in that it focuses on human-AI collaboration, the subject of recent growing interest in the roles of artificial intelligence.
It expands the relevant research area by considering the role of artificial intelligence as an advisor in decision-making and judgment research, and in terms of practical significance, it suggests points that companies should consider in order to enhance AI capability. To improve the effectiveness of AI-based systems, companies must not only introduce high-performance systems but also need employees who properly understand the digital information presented by AI and can add non-digital information to make decisions. Moreover, to increase the utilization of AI-based systems, task-oriented competencies, such as analytical skills and information technology capabilities, are important. In addition, greater performance is expected if employees' personal traits are considered.
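The abstract's core question, whether advisor type moderates the effect of a personality trait on perceived usefulness, is the kind of interaction a regression with an interaction term can test. The sketch below is a minimal illustration on synthetic data; the variable names, coding, and effect sizes are our assumptions, not the study's materials.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the 311-participant data (all effects illustrative).
rng = np.random.default_rng(7)
n = 311
advisor = rng.choice(["human", "ai", "human_ai"], size=n)
consc = rng.normal(0, 1, size=n)                 # conscientiousness score
usefulness = (3
              + 0.8 * (advisor == "human_ai")    # collaboration bonus
              + 0.5 * (advisor == "ai") * consc  # trait matters for AI-only
              + rng.normal(0, 1, size=n))
df = pd.DataFrame({"advisor": advisor, "consc": consc, "usefulness": usefulness})

# The interaction term tests whether advisor type moderates the trait effect.
model = smf.ols("usefulness ~ C(advisor) * consc", data=df).fit()
print(model.summary().tables[1])
```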

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.25-38 / 2019
  • Selecting high-quality information that meets the interests and needs of users from among overflowing content is becoming more important as the generation of content continues. In this flood of information, efforts are being made to better reflect the intention of the user in search results, rather than treating an information request as a simple string. Large IT companies such as Google and Microsoft also focus on developing knowledge-based technologies, including search engines, which provide users with satisfaction and convenience. Finance in particular is one of the fields where text data analysis is expected to be useful and to have potential, because it constantly generates new information, and the earlier the information is, the more valuable it is. Automatic knowledge extraction can be effective in areas such as the financial sector, where the information flow is vast and new information continues to emerge. However, automatic knowledge extraction faces several practical difficulties. First, it is difficult to build corpora from different fields with the same algorithm, and it is difficult to extract good-quality triples. Second, it becomes harder for people to produce labeled text data as the extent and scope of knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult due to the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome the limits described above and improve the semantic performance of stock-related information search, this study attempts to extract knowledge entities using a neural tensor network and to evaluate their performance. Unlike other references, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the effectiveness of the model. From these processes, this study has the following three significances. First, a practical and simple automatic knowledge extraction method is presented. Second, the possibility of performance evaluation is demonstrated through a simple problem definition. Finally, the expressiveness of the knowledge is increased by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study confirming the usefulness of the presented model, experts' reports on 30 individual stocks, the top 30 items by publication frequency from May 30, 2017 to May 21, 2018, are used. The total number of reports is 5,600; 3,074 reports, about 55% of the total, are designated as the training set, and the remaining 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using the neural tensor network, the same number of score functions as stocks are trained.
Thus, when a new entity from the testing set appears, we calculate its score with every score function, and the stock whose function yields the highest score is predicted as the item related to the entity. To evaluate the presented model, we confirm its predictive power and determine whether the score functions are well constructed by calculating the hit ratio over all reports in the testing set. As a result of the empirical study, the presented model shows 69.3% hit accuracy for the testing set, which consists of 2,526 reports. This hit ratio is meaningfully high despite some constraints on the research. Looking at the prediction performance of the model for each stock, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, show far lower performance than average. This result may be due to interference effects with other similar items and the generation of new knowledge. In this paper, we propose a methodology to find key entities, or combinations of them, needed to search for related information in accordance with the user's investment intention. Graph data are generated using only the named entity recognition tool and applied to the neural tensor network without a learning corpus or word vectors for the field. The empirical test confirms the effectiveness of the presented model as described above. However, some limits and things to complement remain; most notably, the especially poor model performance for a few stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented here can be used to match new text information semantically with related stocks.
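For readers unfamiliar with the neural tensor network form this abstract relies on, the sketch below shows the standard bilinear NTN scoring function applied to the paper's setup of one score function per stock and one-hot entity vectors. The dimensions, random initialization, and routing-by-argmax are our assumptions for illustration; the paper's trained parameters and exact architecture may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 100, 4  # top-100 one-hot entity vocabulary; k tensor slices (assumed)

def make_score_fn():
    """Randomly initialised parameters of one stock-specific NTN score function."""
    return dict(W=rng.normal(scale=0.1, size=(k, d, d)),  # bilinear tensor slices
                V=rng.normal(scale=0.1, size=(k, d)),     # linear term
                b=np.zeros(k),
                u=rng.normal(scale=0.1, size=k))

def ntn_score(e, p):
    """u^T tanh(e^T W e + V e + b): the bilinear NTN form, scoring entity e."""
    bilinear = np.einsum('i,kij,j->k', e, p['W'], e)
    return p['u'] @ np.tanh(bilinear + p['V'] @ e + p['b'])

# One score function per stock; a new entity is routed to the argmax stock.
stocks = ['LG ELECTRONICS', 'KiaMtr', 'Mando']
params = {s: make_score_fn() for s in stocks}
entity = np.eye(d)[7]  # one-hot vector of an extracted entity (index illustrative)
print(max(stocks, key=lambda s: ntn_score(entity, params[s])))
```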

The Prediction of Export Credit Guarantee Accident using Machine Learning (기계학습을 이용한 수출신용보증 사고예측)

  • Cho, Jaeyoung;Joo, Jihwan;Han, Ingoo
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.83-102 / 2021
  • The government recently announced various policies for developing the big-data and artificial intelligence fields, providing a great opportunity for the disclosure of high-quality data held within public institutions. KSURE (Korea Trade Insurance Corporation) is a major public institution for financial policy in Korea, and the company is strongly committed to backing export companies through various systems. Nevertheless, few business models based on big-data analysis have yet been realized. In this situation, this paper aims to develop a new business model that can be applied to the ex-ante prediction of the likelihood of a credit guarantee insurance accident. We utilize internal data from KSURE, which supports export companies in Korea, and apply machine learning models. We then compare performance among the predictive models, including Logistic Regression, Random Forest, XGBoost, LightGBM, and DNN (Deep Neural Network). For decades, many researchers have tried to find better models for predicting bankruptcy, since ex-ante prediction is crucial for corporate managers, investors, creditors, and other stakeholders. The prediction of financial distress or bankruptcy originated with Smith (1930), Fitzpatrick (1932), and Merwin (1942). One of the most famous models is Altman's Z-score model (Altman, 1968), which was based on multiple discriminant analysis and is widely used in both research and practice to this day. The author suggests a score model that utilizes five key financial ratios to predict the probability of bankruptcy in the next two years. Ohlson (1980) introduces a logit model to complement some limitations of previous models. Furthermore, Elmer and Borowski (1988) develop and examine a rule-based, automated system for the financial analysis of savings and loans. Since the 1980s, researchers in Korea have also examined the prediction of financial distress or bankruptcy. Kim (1987) analyzes financial ratios and develops a prediction model. Han et al. (1995, 1996, 1997, 2003, 2005, 2006) construct prediction models using various techniques including artificial neural networks. Yang (1996) introduces multiple discriminant analysis and a logit model. Kim and Kim (2001) utilize artificial neural network techniques for the ex-ante prediction of insolvent enterprises. Since then, many scholars have been trying to predict financial distress or bankruptcy more precisely with diverse models such as Random Forest or SVM. One major distinction of our research from previous research is that we focus on examining the predicted probability of default for each sample case, not only on the classification accuracy of each model for the entire sample. Most predictive models in this paper show a classification accuracy of about 70% for the entire sample. To be specific, the LightGBM model shows the highest accuracy of 71.1% and the Logit model the lowest accuracy of 69%. However, these results are open to multiple interpretations. In the business context, more emphasis must be put on minimizing type 2 error, which causes more harmful operating losses for the guaranty company. Thus, we also compare classification accuracy by splitting the predicted probability of default into ten equal intervals.
Examining the classification accuracy for each interval, the Logit model has the highest accuracy of 100% for the 0~10% interval of predicted default probability but a relatively low accuracy of 61.5% for the 90~100% interval. On the other hand, Random Forest, XGBoost, LightGBM, and DNN show more desirable results: they achieve higher accuracy for both the 0~10% and 90~100% intervals but lower accuracy around the 50% interval of predicted default probability. Regarding the distribution of samples across the intervals, both the LightGBM and XGBoost models assign a relatively large number of samples to the 0~10% and 90~100% intervals. Although the Random Forest model has an advantage in classification accuracy for a small number of cases, LightGBM or XGBoost could be more desirable models since they classify a large number of cases into the two extreme intervals of predicted default probability, even allowing for their relatively low classification accuracy. Considering the importance of type 2 error and total prediction accuracy, XGBoost and DNN show superior performance, followed by Random Forest and LightGBM, while logistic regression shows the worst performance. However, each predictive model has a comparative advantage in terms of particular evaluation standards; for instance, the Random Forest model shows almost 100% accuracy for samples expected to have a high probability of default. Collectively, we can construct more comprehensive ensemble models that contain multiple classification machine learning models and conduct majority voting to maximize overall performance.
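The paper's distinctive step, evaluating accuracy within ten equal bands of predicted default probability rather than only overall, is easy to reproduce. The sketch below does so on synthetic data with two of the five models named above; the data, class balance, and 0.5 decision threshold are our assumptions, since the KSURE data are internal.

```python
import numpy as np
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the (non-public) KSURE guarantee data.
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.7],
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

for name, model in [("Logit", LogisticRegression(max_iter=1000)),
                    ("RandomForest", RandomForestClassifier(random_state=42))]:
    model.fit(X_tr, y_tr)
    df = pd.DataFrame({"p": model.predict_proba(X_te)[:, 1], "y": y_te})
    df["band"] = pd.cut(df["p"], bins=np.linspace(0, 1, 11), include_lowest=True)
    # Accuracy inside each 10%-wide band of predicted default probability.
    acc = df.groupby("band", observed=False).apply(
        lambda g: ((g["p"] >= 0.5).astype(int) == g["y"]).mean())
    print(name, "overall:", round(model.score(X_te, y_te), 3))
    print(acc.round(3))
```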

Analysis of shopping website visit types and shopping pattern (쇼핑 웹사이트 탐색 유형과 방문 패턴 분석)

  • Choi, Kyungbin;Nam, Kihwan
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.85-107 / 2019
  • Online consumers browse products belonging to a particular product line or brand with intent to purchase, or simply navigate widely without making a purchase. Research on the behavior and purchases of online consumers has progressed steadily, and related services and applications based on consumer behavior data have been developed in practice. In recent years, consumer customization strategies and recommendation systems have been utilized thanks to the development of big data technology, and attempts are being made to optimize users' shopping experience. Even so, only a small share of online consumers' website visits actually convert to the purchase stage, because online consumers do not visit a website only to purchase products but use and browse websites differently according to their shopping motives and purposes. Therefore, analyzing the various types of visits, not only visits that end in purchase, is important for understanding the behavior of online consumers. In this study, we performed a clustering analysis of sessions based on clickstream data from an e-commerce company in order to explain the diversity and complexity of online consumers' search behavior and to typify it. For the analysis, we converted more than 8 million page-unit data points into visit-unit sessions, resulting in a total of over 500,000 website visit sessions. For each visit session, 12 characteristics such as page views, duration, search diversity, and page-type concentration were extracted for the clustering analysis. Considering the size of the data set, we performed the analysis using the Mini-Batch K-means algorithm, which has advantages in learning speed and efficiency while maintaining clustering performance similar to that of K-means. The optimal number of clusters was found to be four, and differences in session characteristics and purchase rates were identified for each cluster. An online consumer visits a website several times, learns about the product, and then decides to purchase. To analyze this purchasing process over several visits, we constructed visit sequence data for each consumer based on the navigation patterns derived from the clustering analysis. The visit sequence data comprise the series of visits leading up to one purchase, and the items constituting one sequence are the cluster labels derived above. We separately established sequence data for consumers who made purchases and for consumers who only explored products without purchasing during the same period. Sequential pattern mining was then applied to extract frequent patterns from each set of sequence data, with minimum support set to 10%; frequent patterns consist of sequences of cluster labels. While some patterns are common to both sets of sequence data, other frequent patterns are derived from only one side. Through comparative analysis of the extracted frequent patterns, we found that consumers who made purchases showed a visiting pattern of repeatedly searching for a specific product before deciding to purchase it.
The implication of this study is that we analyze the search types of online consumers using large-scale clickstream data and analyze their patterns to explain the purchasing process from a data-driven point of view. Most studies on the typology of online consumers have focused on the characteristics of each type and the key factors distinguishing the types. In this study, we typified the behavior of online consumers and further analyzed the order in which the types are organized into a series of search patterns. In addition, online retailers will be able to improve their purchase conversion through marketing strategies and recommendations tailored to the various visit types, and will be able to evaluate the effect of such strategies through changes in consumers' visit patterns.
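To make the session-clustering step concrete, the sketch below applies Mini-Batch K-means with four clusters to a synthetic stand-in for the 12 session features; the feature distribution and parameter choices are our assumptions, since the clickstream data are proprietary.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for 500,000 sessions x 12 features (page views, duration,
# search diversity, page-type concentration, ...); distribution is illustrative.
rng = np.random.default_rng(0)
sessions = rng.lognormal(mean=1.0, sigma=0.8, size=(500_000, 12))

X = StandardScaler().fit_transform(sessions)
km = MiniBatchKMeans(n_clusters=4, batch_size=10_000, n_init=10, random_state=0)
labels = km.fit_predict(X)  # one cluster label per visit session

# Each consumer's visit history then becomes a sequence of cluster labels,
# e.g. [2, 2, 0, 3], which is fed to a sequential pattern miner with
# minimum support 10%, as the abstract describes.
```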

Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.63-83 / 2019
  • Investors prefer to look for trading points based on the shapes shown in charts rather than complex analysis, such as corporate intrinsic value analysis or technical auxiliary index analysis. However, pattern analysis is difficult and less computerized than users need. In recent years, there have been many studies of stock price patterns using various machine learning techniques, including neural networks, in the field of artificial intelligence (AI). In particular, the development of IT has made it easier to analyze huge amounts of chart data to find patterns that can predict stock prices. Although the short-term forecasting power of prices has improved, long-term forecasting power is limited, so such techniques are used in short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that past technology could not recognize, but this can be vulnerable in practice because whether the patterns found are suitable for trading is a separate matter. When researchers find a meaningful pattern, they locate a point that matches the pattern and then measure performance after n days, assuming a purchase at that point in time. Since this approach calculates virtual revenues, there can be many disparities with reality. Existing research tries to find a pattern with stock price prediction power, whereas this study proposes to define the patterns first and to trade when a pattern with a high success probability appears. The M & W wave patterns published by Merrill (1980) are simple because they can be distinguished by five turning points. Despite reports that some patterns have price predictability, there were no performance reports from use in the actual market. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of increasing pattern recognition accuracy. In this study, 16 up-conversion patterns and 16 down-conversion patterns are reclassified into ten groups so that they can be easily implemented by the system, and only one high-success-rate pattern per group is selected for trading. Patterns that had a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. This reflects a realistic situation because performance is measured assuming that both the buy and the sell have been executed. We tested three ways to calculate the turning points. The first method, the minimum change rate zig-zag method, removes price movements below a certain percentage and calculates the vertices. In the second method, the high-low line zig-zag, a high price that meets the n-day high price line is taken as a peak, and a low price that meets the n-day low price line is taken as a valley. In the third method, the swing wave method, a central high price higher than the n high prices on its left and right is taken as a peak, and a central low price lower than the n low prices on its left and right is taken as a valley. The swing wave method was superior to the other methods in the test results, which we interpret to mean that trading after confirming the completion of a pattern is more effective than trading while the pattern is still unfinished.
Because the number of cases was too large to search exhaustively in this simulation, finding patterns with high success rates directly was virtually impossible, and genetic algorithms (GA) were the most suitable solution. We also performed the simulation using the Walk-forward Analysis (WFA) method, which tests the training section and the application section separately, so we were able to respond appropriately to market changes. In this study, we optimize at the stock portfolio level because there is a risk of over-optimization if we optimize the variables for each individual stock. We therefore set the number of constituent stocks to 20 to increase the effect of diversified investment while avoiding over-optimization. We tested the KOSPI market by dividing it into six categories. In the results, the small-cap portfolio was the most successful and the high-volatility portfolio was the second best. This shows that some price volatility is needed for patterns to take shape, but that more volatility is not always better.
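Of the three turning-point methods tested, the minimum change rate zig-zag is the simplest to state in code. The sketch below is a minimal version under our own assumptions about tie-breaking and the reversal threshold; five consecutive vertices from such a function delimit one M- or W-shaped pattern.

```python
import numpy as np

def zigzag(prices, pct=0.05):
    """Minimum-change-rate zig-zag: confirm a vertex only when price reverses
    by at least `pct` from the last extreme (threshold value illustrative)."""
    pivots, last_idx, direction = [], 0, 0   # direction: +1 up leg, -1 down leg
    for i in range(1, len(prices)):
        change = prices[i] / prices[last_idx] - 1.0
        if direction >= 0 and change <= -pct:      # up leg reversed downward
            pivots.append(last_idx)                # confirm the peak
            direction, last_idx = -1, i
        elif direction <= 0 and change >= pct:     # down leg reversed upward
            pivots.append(last_idx)                # confirm the valley
            direction, last_idx = 1, i
        elif (direction >= 0 and prices[i] > prices[last_idx]) or \
             (direction <= 0 and prices[i] < prices[last_idx]):
            last_idx = i                           # new extreme on current leg
    return pivots                                  # indices of confirmed vertices

print(zigzag(np.array([100, 102, 107, 101, 96, 99, 105, 98])))  # -> [2, 4, 6]
```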

Pareto Ratio and Inequality Level of Knowledge Sharing in Virtual Knowledge Collaboration: Analysis of Behaviors on Wikipedia (지식 공유의 파레토 비율 및 불평등 정도와 가상 지식 협업: 위키피디아 행위 데이터 분석)

  • Park, Hyun-Jung;Shin, Kyung-Shik
    • Journal of Intelligence and Information Systems / v.20 no.3 / pp.19-43 / 2014
  • The Pareto principle, also known as the 80-20 rule, states that roughly 80% of the effects come from 20% of the causes for many events, including natural phenomena. It has been recognized as a golden rule in business, with wide application of such findings as 20 percent of customers accounting for 80 percent of total sales. On the other hand, the Long Tail theory, pointing out that "the trivial many" produces more value than "the vital few," has gained popularity in recent times with the tremendous reduction of distribution and inventory costs through the development of ICT (Information and Communication Technology). This study started with a view to illuminating how these two primary business paradigms, the Pareto principle and the Long Tail theory, relate to the success of virtual knowledge collaboration. The importance of virtual knowledge collaboration is soaring in this era of globalization and virtualization, transcending geographical and temporal constraints. Many previous studies on knowledge sharing have focused on the factors that affect knowledge sharing, seeking to boost individual knowledge sharing and resolve the social dilemma caused by the fact that rational individuals are likely to consume rather than contribute knowledge. Knowledge collaboration can be defined as the creation of knowledge not only by sharing knowledge, but also by transforming and integrating it. From this perspective, the relative distribution of knowledge sharing among participants can count as much as the absolute amount of individual knowledge sharing. In particular, whether a greater contribution by the upper 20 percent of participants in knowledge sharing enhances the efficiency of overall knowledge collaboration is an issue of interest. This study deals with the effect of this knowledge sharing distribution on the efficiency of knowledge collaboration and is extended to reflect work characteristics. All analyses were conducted on actual behavior data instead of self-reported questionnaire surveys. More specifically, we analyzed the collaborative behaviors of the editors of 2,978 English Wikipedia featured articles, the best quality grade of articles in English Wikipedia. We adopted the Pareto ratio, the ratio of the number of knowledge contributions of the upper 20 percent of participants to the total number of knowledge contributions made by all participants of an article group, to examine the effect of the Pareto principle. In addition, the Gini coefficient, which represents the inequality of income among a group of people, was applied to capture the inequality of knowledge contribution. Hypotheses were set up based on the assumption that a higher ratio of knowledge contribution by more highly motivated participants will lead to higher collaboration efficiency, but that if the ratio gets too high, collaboration efficiency will deteriorate because overall informational diversity is threatened and the knowledge contribution of less motivated participants is discouraged. Cox regression models were formulated for each of the focal variables, Pareto ratio and Gini coefficient, with seven control variables such as the number of editors involved in an article, the average time between successive edits of an article, and the number of sections a featured article has. The dependent variable of the Cox models is the time from article initiation to promotion to featured article level, indicating the efficiency of knowledge collaboration.
To examine whether the effects of the focal variables vary depending on the characteristics of a group task, we classified the 2,978 featured articles into two categories: academic and non-academic. Academic articles are those referencing at least one paper published in an SCI, SSCI, A&HCI, or SCIE journal. We assumed that academic articles are more complex, entail more information processing and problem solving, and thus require more skill variety and expertise. The analysis results indicate the following. First, the Pareto ratio and the inequality of knowledge sharing relate in a curvilinear fashion to collaboration efficiency in an online community, promoting it up to an optimal point and undermining it thereafter. Second, this curvilinear effect is more pronounced for the more academic tasks in an online community.
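The two focal variables are straightforward to compute from per-editor contribution counts. The sketch below shows both, on an illustrative edit-count vector rather than the Wikipedia data; the Gini formula used is the standard cumulative-share form.

```python
import numpy as np

def pareto_ratio(contribs):
    """Share of total contributions made by the top 20% of contributors."""
    c = np.sort(np.asarray(contribs, dtype=float))[::-1]
    top = max(1, int(np.ceil(0.2 * len(c))))
    return c[:top].sum() / c.sum()

def gini(contribs):
    """Gini coefficient of contribution inequality (0 = equal, ~1 = maximal)."""
    c = np.sort(np.asarray(contribs, dtype=float))  # ascending
    cum = np.cumsum(c)
    n = len(c)
    return (n + 1 - 2 * (cum / cum[-1]).sum()) / n

edits = [120, 45, 30, 8, 5, 3, 2, 2, 1, 1]  # per-editor edit counts (illustrative)
print(round(pareto_ratio(edits), 3), round(gini(edits), 3))  # ~0.76, ~0.71
```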