• Title/Summary/Keyword: Business Classification


The Effect of Data Size on the k-NN Predictability: Application to Samsung Electronics Stock Market Prediction (데이터 크기에 따른 k-NN의 예측력 연구: 삼성전자주가를 사례로)

  • Chun, Se-Hak
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.239-251 / 2019
  • Statistical methods such as moving averages, Kalman filtering, exponential smoothing, regression analysis, and ARIMA (autoregressive integrated moving average) have been used for stock market prediction, but they have not produced superior performance. In recent years, machine learning techniques such as artificial neural networks, SVMs, and genetic algorithms have been widely applied to stock market prediction. In particular, a case-based reasoning method known as the k-nearest neighbor (k-NN) is also widely used for stock price prediction. Case-based reasoning retrieves several similar cases from past cases when a new problem occurs and combines the class labels of those similar cases to produce a classification for the new problem. However, case-based reasoning has some problems. First, it tends to search for a fixed number of neighbors in the observation space, always selecting the same number of neighbors rather than the best similar neighbors for the target case; it may therefore take more cases into account even when fewer cases are actually applicable to the subject. Second, it may select neighbors that are far away from the target case. Thus, case-based reasoning does not guarantee an optimal pseudo-neighborhood for various target cases, and predictability can be degraded by deviation from the desired similar neighbors. This paper examines how the size of the learning data affects stock price predictability with k-NN and compares the predictability of k-NN with that of the random walk model according to the size of the learning data and the number of neighbors. In this study, Samsung Electronics stock prices were predicted using two learning datasets. For the prediction of the next day's closing price, four variables were used: opening price, daily high, daily low, and daily close. In the first experiment, data from January 1, 2000 to December 31, 2017 were used for learning; in the second experiment, data from January 1, 2015 to December 31, 2017 were used. The test data covered January 1, 2018 to August 31, 2018 in both experiments. We compared the performance of k-NN with the random walk model on the two learning datasets. The mean absolute percentage error (MAPE) was 1.3497 for the random walk model and 1.3570 for k-NN in the first experiment, whereas it was 1.3497 for the random walk model and 1.2928 for k-NN in the second experiment. These results show that predictive power is higher when more learning data are used than when less learning data are used. The paper also shows that k-NN generally produces better predictive power than the random walk model for larger learning datasets, but not when the learning dataset is relatively small. Future studies need to consider macroeconomic variables related to stock price forecasting in addition to the opening, low, high, and closing prices. To produce better results, it is also recommended that k-NN find its nearest neighbors using a second-step filtering method that considers fundamental economic variables as well as a sufficient amount of learning data.
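A minimal sketch of the comparison described above, not the authors' code: it predicts the next day's close with scikit-learn's KNeighborsRegressor from the four daily price variables, benchmarks against a random walk (today's close as tomorrow's forecast), and scores both with MAPE. The DataFrame `prices`, its column names, and the date-indexed split are assumptions made for illustration.

```python
# Hedged sketch of the k-NN vs. random-walk comparison (not the authors' code);
# assumes a pandas DataFrame `prices` with columns open, high, low, close
# indexed by trading day.
import numpy as np
import pandas as pd
from sklearn.neighbors import KNeighborsRegressor

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.mean(np.abs((actual - predicted) / actual)) * 100

def knn_vs_random_walk(prices: pd.DataFrame, train_end: str, k: int = 5):
    # Features: today's open/high/low/close; target: next day's close.
    X = prices[["open", "high", "low", "close"]].iloc[:-1]
    y = prices["close"].shift(-1).dropna()

    X_train, y_train = X.loc[:train_end], y.loc[:train_end]
    X_test, y_test = X.loc[train_end:].iloc[1:], y.loc[train_end:].iloc[1:]

    knn = KNeighborsRegressor(n_neighbors=k).fit(X_train, y_train)
    knn_pred = knn.predict(X_test)
    rw_pred = X_test["close"].values  # random walk: forecast = today's close

    return mape(y_test, knn_pred), mape(y_test, rw_pred)
```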

A Study on the Management of Manhwa Contents Records and Archives (만화기록 관리 방안 연구)

  • Kim, Seon Mi;Kim, Ik Han
    • The Korean Journal of Archival Studies / no.28 / pp.35-81 / 2011
  • Manhwa is a mass medium that exposes all faces of an era, such as politics, society, and culture, through methods such as irony and parody. Since Manhwa records are a primary cultural infrastructure, they can create high value-added industries when connected with fancy goods, characters, games, movies, dramas, theme parks, and advertising. However, due to the lack of an active and systematic acquisition system, precious Manhwa manuscripts are being lost every year, and content that is hard to preserve, such as Manhwa in the form of electronic records, is increasing; countermeasures for managing Manhwa contents are therefore urgently needed. Based on these perceptions, this study examines the need for Manhwa records management and analyzes the characteristics and components of Manhwa records. At the same time, the functions of a records management process reflecting the characteristics of Manhwa records were extracted by analyzing various cases of overseas cartoon archives. The framework of the record-keeping regime was then segmented into acquisition, management, and service areas, and a general archiving strategy for managing Manhwa contents records was established and suggested. The acquired Manhwa contents records will secure the context among records, warrant their preservation, and provide diverse access points by reflecting multiple classifications and multi-level descriptive elements. Manhwa records that have completed intellectual arrangement will be preserved after conservation in an environment equipped with preservation facilities, or in digital format in the case of electronic records or when there is a potential risk of damage to the records. Since the purpose of Manhwa records is to be used, the information may be provided to diverse classes of users through exhibition, distribution, and the development of archival information contents. Since the term "Manhwa records" is still unfamiliar and almost no study has been conducted from the perspective of records management, a limitation of this study is that it only presents acquisition, management, and service strategies for Manhwa contents and suggests simple examples. However, if this Manhwa records management strategy can be introduced in practice to Manhwa manuscript repositories through an archival approach, it will allow systematic acquisition, preservation, and arrangement of Manhwa records and will contribute greatly to forming a foundation for the management of future Korean cultural contents.

Recommender system using BERT sentiment analysis (BERT 기반 감성분석을 이용한 추천시스템)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems / v.27 no.2 / pp.1-15 / 2021
  • When it is difficult to make a decision, we ask for advice from friends or people around us, and when we decide to buy products online, we read anonymous reviews before buying. With the advent of the data-driven era, the development of IT is producing vast amounts of data from individuals and objects. Companies and individuals have accumulated, processed, and analyzed such large amounts of data that decisions which used to depend on experts can now be made or executed directly from data. Nowadays, the recommender system plays a vital role in determining a user's preferences for purchasing goods, and web services (Facebook, Amazon, Netflix, YouTube) use recommender systems to induce clicks. For example, YouTube's recommender system, used by a billion people worldwide every month, draws on the videos users liked and the videos they watched. Recommender system research is deeply linked to practical business, so many researchers are interested in building better solutions. Recommender systems use information obtained from their users to generate recommendations, because building a recommender system requires information on the items a user is likely to prefer. Through recommender systems, we have begun to trust patterns and rules derived from data rather than empirical intuition, and the growing capacity of data has pushed machine learning toward deep learning. However, recommender systems are not a solution to everything: they require sufficient data without scarcity as well as detailed information about the individual, and they work correctly only when these conditions are met. When the interaction log is insufficient, recommendation becomes a difficult problem for both sellers and consumers, because the seller needs to make recommendations at a personal level and the consumer needs to receive appropriate recommendations based on reliable data. In this paper, to improve the accuracy of "appropriate recommendations" to consumers, a recommender system combined with context-based deep learning is proposed. This research combines user-based data to create a hybrid recommender system; the hybrid approach is not a purely collaborative recommender system but a collaborative extension that integrates user data with deep learning. Customer review data were used as the dataset. Consumers buy products in online shopping malls and then write product reviews; ratings from buyers who have already purchased give users confidence before purchasing a product. However, recommender systems mainly use scores or ratings rather than reviews to suggest items purchased by many users. In fact, consumer reviews contain product opinions and user sentiment that can be used for evaluation, and by incorporating these into the study, this paper aims to improve the recommender system. The proposed algorithm is intended for situations in which individuals have difficulty selecting an item; consumer reviews and record patterns make it possible to rely on the recommendations appropriately. The algorithm implements a recommender system through collaborative filtering, and predictive accuracy is measured by root mean squared error (RMSE) and mean absolute error (MAE). Netflix has strategically promoted its recommender system through annual competitions to reduce RMSE, making predictive accuracy a standard benchmark. Research on hybrid recommender systems that combine NLP approaches, deep learning, and other techniques for personalization has been increasing. Among NLP studies, sentiment analysis began to take shape in the mid-2000s as user review data increased. Sentiment analysis is a text classification task based on machine learning, but machine learning-based sentiment analysis has the disadvantage that it is difficult to capture the information expressed in a review because the characteristics of the text are hard to take into account. In this study, we propose a deep learning recommender system that utilizes BERT-based sentiment analysis to minimize these disadvantages. The comparison models were recommender systems based on Naive-CF (collaborative filtering), SVD (singular value decomposition)-CF, MF (matrix factorization)-CF, BPR-MF (Bayesian personalized ranking matrix factorization)-CF, LSTM, CNN-LSTM, and GRU (gated recurrent units). As a result of the experiment, the BERT-based recommender system performed best.
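The following is a hedged illustration, not the paper's implementation, of how a BERT sentiment score might be blended with review ratings before collaborative filtering, together with the RMSE/MAE metrics used in the study. The Hugging Face `pipeline("sentiment-analysis")` call, the blending weight, and the character-level truncation are illustrative assumptions.

```python
# Hedged illustration (not the paper's implementation): blend a BERT sentiment
# score into review ratings before collaborative filtering, and evaluate
# predictions with RMSE/MAE as in the study.
import numpy as np
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # default pretrained English model

def sentiment_adjusted_rating(rating, review_text, weight=0.5):
    """Shift a 1-5 star rating toward the review's BERT sentiment polarity."""
    out = sentiment(review_text[:512])[0]      # rough character truncation
    polarity = out["score"] if out["label"] == "POSITIVE" else -out["score"]
    return rating + weight * polarity          # hybrid signal fed to the CF model

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def mae(y_true, y_pred):
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))
```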

A Study on the Structural Reinforcement of the Modified Caisson Floating Dock (개조된 케이슨 플로팅 도크의 구조 보강에 대한 연구)

  • Kim, Hong-Jo;Seo, Kwang-Cheol;Park, Joo-Shin
    • Journal of the Korean Society of Marine Environment & Safety / v.27 no.1 / pp.172-178 / 2021
  • In the ship repair market, interest in maintenance and repair is steadily increasing owing to stronger regulations for preventing environmental pollution from ships and stricter safety standards for ship structures. Reflecting this trend, repair requests from foreign shipping companies to repair shipyards on Korea's southwestern coast are increasing. However, because most repair shipyards in the southwestern area are small and medium-sized companies, it is difficult to achieve an integrated synergy effect among them. Moreover, their infrastructure is not integrated, so using it jointly is a challenge, which acts as an obstacle to revitalizing the repair shipbuilding industry. Floating docks are indispensable for operating the repair shipbuilding business, and most of them are operated after renovation and repair of aging caisson docks imported from overseas. However, their service life exceeds 30 years and there is no structural inspection standard, so they are vulnerable in terms of safety. In this study, the finite element analysis program ANSYS was used to evaluate the structural safety of a modified caisson dock and to derive additional structural reinforcement schemes for the problems identified. Classification regulations exist for floating docks, but with respect to structural strength they are insufficient and difficult to apply; these gaps were supplemented through a detailed structural FE analysis. Candidate reinforcement plans, reinforcing the pontoon deck and reinforcing the side tank, were considered in light of the conditions of the repair shipyard. Based on the structural analysis, the final plan selected was to reinforce the side wing tank, and the actual structure was fabricated to reflect this reinforcement. Our results can be used as reference data for improving the structural strength of similar facilities, and we believe the optimal solution can be found quickly if this method is used during renovation and repair.

Characteristics of Environmental Factors and Vegetation Community of Zabelia tyaihyonii (Nakai) Hisauti & H.Hara among the Target Plant Species for Conservation in Baekdudaegan (백두대간 중점보전종인 댕강나무의 식생 군집 및 환경인자 특성)

  • Kim, Ji-Dong;Lee, Hye-Jeong;Lee, Dong-Hyuk;Byeon, Jun Gi;Park, Byeong Joo;Heo, Tae-Im
    • Journal of Korean Society of Forest Science / v.111 no.2 / pp.201-223 / 2022
  • Species extinctions are currently increasing due to climate change and continued anthropogenic impacts. We selected 300 plant species for priority conservation in the Baekdudaegan area, which is a major ecological axis of Korea. We aimed to investigate the vegetation community and environmental characteristics of Zabelia tyaihyonii, one of the target plant species in the Baekdudaegan region, in its limestone habitat in order to derive effective conservation strategies. In Danyang-gun, Yeongwol-gun, and Jecheon-si, we selected 36 investigation sites where Z. tyaihyonii was present and investigated the vegetation, flora, soil, and physical environment. We also found notable plants such as Thalictrum petaloideum, Sillaphyton podagraria, and Neillia uekii at the investigation sites. We classified the forest vegetation communities into 4 vegetation units and 7 species group types. Canonical correspondence analysis (CCA) of the vegetation community and habitat factors gave an overall explanatory power of 75.2%, and we classified the environmental characteristics of the habitat of Z. tyaihyonii into three groups. Among these, we detected relationships with the environmental factors elevation, slope, organic matter, rock ratio, pH, potassium, and sodium. We identified numerous rare and endemic plants, including Thalictrum petaloideum, at the investigation sites and determined that these groups need to be preserved at the habitat level. The classification of the vegetation units based on the emerging plants and the CCA reaffirmed the uniqueness and specificity of the vegetation community in the habitat of Z. tyaihyonii. We anticipate that our results will be used as scientific evidence for the empirical conservation of the native habitats of Z. tyaihyonii.

Digital Archives of Cultural Archetype Contents: Its Problems and Direction (디지털 아카이브즈의 문제점과 방향 - 문화원형 콘텐츠를 중심으로 -)

  • Hahm, Han-Hee;Park, Soon-Cheol
    • Journal of the Korean BIBLIA Society for library and Information Science / v.17 no.2 / pp.23-42 / 2006
  • This is a study of the digital archives of Culturecontent.com, where 'Cultural Archetype Contents' are currently in service. One of the major purposes of our study is to point out problems in the current system and ultimately propose improvements to the digital archives. The government launched a four-year project for developing cultural archetype content sources and establishing the related business, with the hope of enhancing the nation's competitiveness. More specifically, the project focuses on producing source materials for cultural archetype contents on the subjects of Korea's history, tradition, everyday life, arts, and general geographical books. In addition, through this project the government intends to establish a proper distribution system for digitized culture contents and to manage copyright issues. This paper analyzes the digital archives system that stores the culture content data produced from 2002 to 2005 and evaluates the current system's weaknesses and strengths. Our findings are summarized as follows. First, the digital archives system does not contain a semantic search engine, and therefore its full functionality is lagging. Second, similar data are not classified into the same categories but into different ones, which confuses and inconveniences users; users who want to find source materials may be disappointed by the current system. Our paper suggests a better digital archives system based on text mining technology, consisting of five significant intelligent processes: keyword search, summarization, clustering, classification, and topic tracking. Our paper endeavors to develop the best technical environment for preserving and using culture contents data. With the upgraded digital settings, users of culture contents data will discover a world of new knowledge, and the technology we introduce in this paper will lead to the highest achievable digital intelligence through a new framework.
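As a rough sketch of the kind of intelligent processing proposed above (keyword search, clustering, classification), the snippet below builds TF-IDF vectors over placeholder documents, clusters them with k-means, and ranks records against a query by cosine similarity. It is not the Culturecontent.com system; the sample texts, cluster count, and function names are illustrative.

```python
# Hypothetical sketch of text-mining support for archive search and grouping;
# the corpus and parameters are placeholders, not the actual system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

documents = ["...archive record text...", "...another record..."]  # placeholder corpus

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)

# Clustering: group similar records so related materials land in the same category.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(doc_vectors)

# Keyword search: rank records by cosine similarity to the query vector.
def search(query, top_k=5):
    q = vectorizer.transform([query])
    scores = cosine_similarity(q, doc_vectors).ravel()
    return scores.argsort()[::-1][:top_k]
```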

Text Mining-Based Emerging Trend Analysis for e-Learning Contents Targeting for CEO (텍스트마이닝을 통한 최고경영자 대상 이러닝 콘텐츠 트렌드 분석)

  • Kyung-Hoon Kim;Myungsin Chae;Byungtae Lee
    • Information Systems Review / v.19 no.2 / pp.1-19 / 2017
  • Original scripts of e-learning lectures for the CEOs of corporation S were analyzed using topic analysis, a text mining method. Twenty-two topics were extracted based on keywords chosen from five years of records spanning 2011 to 2015, and analysis was then conducted on various issues. Promising topics were selected through evaluation and element analysis of the members of each topic. In management and economics, members showed high satisfaction with and interest in topics on marketing strategy, human resource management, and communication. Philosophy, the history of war, and history attracted high interest and satisfaction in the humanities, while mind health did so in the lifestyle field. Topics accounting for a large proportion of content were also identified, but they did not increase member satisfaction. In the IT field, educational content responds sensitively to changes of the times but may not increase members' interest and satisfaction. The present study found that content production for CEOs should draw out deep implications for value innovation through the application of technology, instead of stopping at the technical delivery of information. Previous studies classified contents superficially based on the names of content programs when analyzing the status of content operation, whereas text mining can derive deeper content and subject classifications from unstructured script data. This approach can also reveal current shortages and necessary fields when the service contents of each theme are displayed by year. This study was based on data obtained from influential e-learning companies in Korea; obtaining broadly practical results was difficult because data were not acquired from portal sites or social networking services. The e-learning content trends for CEOs were analyzed, and the analysis also examined the intellectual interests of CEOs in each field.
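A hedged sketch of the topic-analysis step, not the study's actual pipeline: extracting topics from lecture scripts with scikit-learn's LDA and printing the top terms per topic. The placeholder corpus, English stop words, and preprocessing are assumptions; the study's 22 topics appear here only as the `n_components` setting.

```python
# Illustrative topic extraction over placeholder scripts (not the study's code).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

scripts = ["...lecture script text...", "...another script..."]  # placeholder corpus

vectorizer = CountVectorizer(max_df=0.95, min_df=1, stop_words="english")
dtm = vectorizer.fit_transform(scripts)

# The study extracted 22 topics; the number is a tunable parameter.
lda = LatentDirichletAllocation(n_components=22, random_state=0).fit(dtm)

terms = vectorizer.get_feature_names_out()
for idx, topic in enumerate(lda.components_):
    top_terms = [terms[i] for i in topic.argsort()[-10:][::-1]]
    print(f"Topic {idx}: {', '.join(top_terms)}")
```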

Multi-classification of Osteoporosis Grading Stages Using Abdominal Computed Tomography with Clinical Variables : Application of Deep Learning with a Convolutional Neural Network (멀티 모달리티 데이터 활용을 통한 골다공증 단계 다중 분류 시스템 개발: 합성곱 신경망 기반의 딥러닝 적용)

  • Tae Jun Ha;Hee Sang Kim;Seong Uk Kang;DooHee Lee;Woo Jin Kim;Ki Won Moon;Hyun-Soo Choi;Jeong Hyun Kim;Yoon Kim;So Hyeon Bak;Sang Won Park
    • Journal of the Korean Society of Radiology / v.18 no.3 / pp.187-201 / 2024
  • Osteoporosis is a major global health issue that often remains undetected until a fracture occurs. To facilitate early detection, deep learning (DL) models were developed to classify osteoporosis using abdominal computed tomography (CT) scans. This study used retrospectively collected data from 3,012 contrast-enhanced abdominal CT scans. The DL models developed in this study were constructed using image data, demographic/clinical information, and multi-modality data, respectively. Patients were categorized into normal, osteopenia, and osteoporosis groups based on their T-scores obtained from dual-energy X-ray absorptiometry. The models showed high accuracy and effectiveness, with the combined-data model performing best, achieving an area under the receiver operating characteristic curve of 0.94 and an accuracy of 0.80. The image-based model also performed well, while the demographic-data model had lower accuracy and effectiveness. In addition, the DL model was interpreted with gradient-weighted class activation mapping (Grad-CAM) to highlight clinically relevant features in the images, revealing the femoral neck as a common site for fractures. The study shows that DL can accurately identify osteoporosis stages from clinical data, indicating the potential of abdominal CT scans for early osteoporosis detection and for reducing fracture risk through prompt treatment.
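An illustrative sketch, not the authors' architecture, of a multi-modality classifier along the lines described above: a small CNN branch for a CT image, an MLP branch for demographic/clinical variables, and a fused head producing the three-class output (normal, osteopenia, osteoporosis). Input sizes, layer widths, and the class/module names are placeholder assumptions.

```python
# Hypothetical multi-modality classifier: CNN image branch + clinical MLP branch,
# fused before a 3-class output. Shapes and widths are illustrative only.
import torch
import torch.nn as nn

class MultiModalOsteoNet(nn.Module):
    def __init__(self, n_clinical: int, n_classes: int = 3):
        super().__init__()
        self.cnn = nn.Sequential(                      # image branch
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.clinical = nn.Sequential(                 # tabular branch
            nn.Linear(n_clinical, 16), nn.ReLU(),
        )
        self.head = nn.Linear(32 + 16, n_classes)      # fused classifier

    def forward(self, image, clinical):
        return self.head(torch.cat([self.cnn(image), self.clinical(clinical)], dim=1))

# Example: one grayscale CT slice (1x128x128) plus 5 clinical variables.
logits = MultiModalOsteoNet(n_clinical=5)(torch.randn(1, 1, 128, 128), torch.randn(1, 5))
```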

Clickstream Big Data Mining for Demographics based Digital Marketing (인구통계특성 기반 디지털 마케팅을 위한 클릭스트림 빅데이터 마이닝)

  • Park, Jiae;Cho, Yoonho
    • Journal of Intelligence and Information Systems / v.22 no.3 / pp.143-163 / 2016
  • The demographics of Internet users are the most basic and important sources for target marketing and personalized advertisements on digital marketing channels, which include email, mobile, and social media. However, it has gradually become difficult to collect the demographics of Internet users because their activities are anonymous in many cases. Although a marketing department can obtain demographics using online or offline surveys, these approaches are expensive, take a long time, and are likely to include false statements. Clickstream data is the record an Internet user leaves behind while visiting websites: as the user clicks anywhere in a webpage, the activity is logged in semi-structured website log files. Such data show what pages users visited, how long they stayed, how often and when they visited, which sites they prefer, what keywords they used to find a site, whether they purchased anything, and so forth. For this reason, some researchers have tried to infer the demographics of Internet users from their clickstream data, deriving various independent variables likely to be correlated with demographics, including search keywords; frequency and intensity by time, day, and month; variety of websites visited; and text information from the web pages visited. The demographic attributes to be predicted also vary across papers and cover gender, age, job, location, income, education, marital status, and presence of children. A variety of data mining methods, such as LSA, SVM, decision trees, neural networks, logistic regression, and k-nearest neighbors, have been used for prediction model building. However, prior research has not yet identified which data mining method is appropriate for predicting each demographic variable. Moreover, the independent variables studied so far need to be reviewed, combined as needed, and evaluated for building the best prediction model. The objective of this study is to choose the clickstream attributes most likely to be correlated with demographics based on previous research, and then to identify which data mining method is best suited to predicting each demographic attribute. Among the demographic attributes, this paper focuses on predicting gender, age, marital status, residence, and job, and 64 clickstream attributes drawn from previous research are used as predictors. The overall process of predictive model building is composed of four steps. In the first step, we create user profiles that include the 64 clickstream attributes and 5 demographic attributes. The second step performs dimension reduction of the clickstream variables to address the curse of dimensionality and the overfitting problem, using three approaches based on decision trees, PCA, and cluster analysis. In the third step, we build alternative predictive models for each demographic variable using SVM, neural networks, and logistic regression. The last step evaluates the alternative models in terms of accuracy and selects the best model. For the experiments, we used clickstream data covering 5 demographics and 16,962,705 online activities of 5,000 Internet users. IBM SPSS Modeler 17.0 was used for the prediction process, and 5-fold cross-validation was conducted to enhance the reliability of the experiments. The experimental results verify that there is a specific data mining method well suited to each demographic variable. For example, age prediction performs best with decision tree-based dimension reduction and a neural network, whereas gender and marital status are predicted most accurately by applying SVM without dimension reduction. We conclude that the online behaviors of Internet users, captured from clickstream data analysis, can be used to predict their demographics and thereby be utilized for digital marketing.
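A minimal sketch of one alternative model from the four-step process above, not the authors' SPSS Modeler workflow: PCA-based dimension reduction followed by an SVM classifier, evaluated with 5-fold cross-validation. The random placeholder data stands in for the 64 clickstream attributes and one demographic label.

```python
# Illustrative pipeline: dimension reduction + SVM with 5-fold CV,
# using synthetic stand-in data (not the study's dataset).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))                 # placeholder clickstream features
y = rng.integers(0, 2, size=500)               # placeholder binary demographic label

model = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
scores = cross_val_score(model, X, y, cv=5)    # 5-fold CV as in the study
print(f"mean accuracy: {scores.mean():.3f}")
```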

Performance Improvement on Short Volatility Strategy with Asymmetric Spillover Effect and SVM (비대칭적 전이효과와 SVM을 이용한 변동성 매도전략의 수익성 개선)

  • Kim, Sun Woong
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.119-133 / 2020
  • Fama asserted that in an efficient market, we cannot make a trading rule that consistently outperforms average stock market returns. This study suggests a machine learning algorithm to improve the trading performance of an intraday short volatility strategy (SVS) that exploits the asymmetric volatility spillover effect, and analyzes the resulting improvement. Generally, stock market volatility has a negative relation with stock market returns, and Korean stock market volatility is influenced by US stock market volatility. This volatility spillover effect is asymmetric: upward and downward moves in US stock market volatility influence the next day's volatility of the Korean stock market differently. We collected the S&P 500 index, the VIX, the KOSPI 200 index, and the V-KOSPI 200 from 2008 to 2018. We found a negative relation between the S&P 500 and the VIX, and between the KOSPI 200 and the V-KOSPI 200, and documented a strong volatility spillover effect from the VIX to the V-KOSPI 200. Interestingly, the spillover was also asymmetric: whereas a VIX rise is fully reflected in the opening volatility of the V-KOSPI 200, a VIX fall is only partially reflected at the open and its influence lasts until the Korean market close. If the stock market were efficient, there would be no reason for this asymmetric volatility spillover effect to exist; it is a counterexample to the efficient market hypothesis. To utilize this anomalous volatility spillover pattern, we analyzed an intraday volatility selling strategy that sells the Korean volatility market short in the morning after US stock market volatility closes down and takes no position after the VIX closes up. It produced a profit every year between 2008 and 2018, with a percent profitable of 68%. The strategy showed a higher average annual return of 129% relative to the benchmark's 33%, and its maximum drawdown (MDD) of -41% was smaller than the benchmark's -101%. The Sharpe ratio of the SVS strategy, 0.32, is much greater than the benchmark's 0.08. The Sharpe ratio considers return and risk simultaneously, being calculated as return divided by risk, so a high Sharpe ratio indicates strong performance when comparing strategies with different risk and return structures. Real-world trading incurs trading costs, including brokerage and slippage costs; when trading costs are considered, the difference between average annual returns of 76% and -10% becomes clear. To improve the performance of the suggested volatility trading strategy, we used the well-known SVM algorithm. The input variables are the VIX close-to-close return at day t-1, the VIX open-to-close return at day t-1, and the V-KOSPI 200 (VK) open return at day t, and the output is the up/down classification of the VK open-to-close return at day t. The training period is 2008 to 2014 and the testing period is 2015 to 2018. The kernel functions are the linear, radial basis, and polynomial functions. We suggest a modified short volatility strategy (m-SVS) that sells the VK in the morning when the SVM output is Down and takes no position when the output is Up. The trading performance improved remarkably: over the testing period, the m-SVS strategy showed very high profit and low risk relative to the benchmark SVS strategy. The annual return of the m-SVS strategy is 123%, higher than that of the SVS strategy, and the risk factor, MDD, also improved significantly from -41% to -29%.
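A hedged sketch of the SVM filter described above, not the author's code: it classifies the VK open-to-close move as Up or Down from the previous day's VIX returns and the current VK open return, then signals a volatility sale only on Down predictions. The DataFrame, its column names, and the date-indexed train/test split are illustrative assumptions.

```python
# Illustrative m-SVS filter: SVM classification of the VK open-to-close move,
# trading only on "Down" signals. Assumes a date-indexed DataFrame `df` with
# the (hypothetical) column names below.
import pandas as pd
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def fit_msvs_filter(df: pd.DataFrame):
    X = df[["vix_close_to_close_t1", "vix_open_to_close_t1", "vk_open_return"]]
    y = (df["vk_open_to_close"] > 0).astype(int)   # 1 = Up, 0 = Down

    train = df.index.year <= 2014                  # 2008-2014 training period
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    model.fit(X[train], y[train])

    # Sell volatility in the morning only when the SVM predicts "Down".
    signal = pd.Series(model.predict(X[~train]), index=df.index[~train])
    return model, signal.map({0: "sell_volatility", 1: "no_position"})
```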