• Title/Summary/Keyword: Attempts


Application of Support Vector Regression for Improving the Performance of the Emotion Prediction Model (감정예측모형의 성과개선을 위한 Support Vector Regression 응용)

  • Kim, Seongjin;Ryoo, Eunchung;Jung, Min Kyu;Kim, Jae Kyeong;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.18 no.3 / pp.185-202 / 2012
  • Since the value of information has been recognized in the information society, the usage and collection of information have become important. Like an artistic painting, a facial expression contains a wealth of information and can be described in thousands of words. Following this idea, there have recently been a number of attempts to provide customers and companies with an intelligent service that perceives human emotions through facial expressions. For example, MIT Media Lab, the leading organization in this research area, has developed a human emotion prediction model and applied its studies to commercial business. In the academic area, conventional methods such as Multiple Regression Analysis (MRA) or Artificial Neural Networks (ANN) have been applied to predict human emotion in prior studies. However, MRA is generally criticized for its low prediction accuracy. This is inevitable, since MRA can only explain a linear relationship between the dependent variable and the independent variables. To mitigate the limitations of MRA, some studies, such as Jung and Kim (2012), have used ANN as an alternative and reported that ANN generated more accurate predictions than statistical methods like MRA. However, ANN has also been criticized due to overfitting and the difficulty of network design (e.g., setting the number of layers and the number of nodes in the hidden layers). Against this background, we propose a novel model using Support Vector Regression (SVR) in order to increase prediction accuracy. SVR is an extended version of Support Vector Machine (SVM) designed to solve regression problems. The model produced by SVR depends only on a subset of the training data, because the cost function for building the model ignores any training data that is close (within a threshold ε) to the model prediction. Using SVR, we tried to build a model that can measure the level of arousal and valence from facial features. To validate the usefulness of the proposed model, we collected data on facial reactions to appropriate visual stimuli and extracted features from the data. Next, preprocessing steps were taken to choose statistically significant variables. In total, 297 cases were used for the experiment. As comparative models, we also applied MRA and ANN to the same data set. For SVR, we adopted the ε-insensitive loss function and the grid search technique to find the optimal values of parameters such as C, d, σ², and ε. For ANN, we adopted a standard three-layer backpropagation network with a single hidden layer. The learning rate and momentum rate of the ANN were set to 10%, and we used the sigmoid function as the transfer function of the hidden and output nodes. We performed the experiments repeatedly while varying the number of nodes in the hidden layer among n/2, n, 3n/2, and 2n, where n is the number of input variables. The stopping condition for ANN was set to 50,000 learning events. We used MAE (Mean Absolute Error) as the measure for performance comparison. From the experiment, we found that SVR achieved the highest prediction accuracy for the hold-out data set compared to MRA and ANN. Regardless of the target variable (the level of arousal or the level of positive/negative valence), SVR showed the best performance for the hold-out data set. ANN also outperformed MRA; however, it showed considerably lower prediction accuracy than SVR for both target variables. The findings of our research are expected to be useful to researchers and practitioners who wish to build models for recognizing human emotions.
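As an illustration of the experimental setup described above, the following is a minimal Python (scikit-learn) sketch, assuming a feature matrix X of facial features and a continuous target y such as the arousal level; the placeholder data, the parameter grid, and the network settings are assumptions for illustration, not the study's actual configuration.

```python
# Hypothetical sketch: SVR with epsilon-insensitive loss tuned by grid search,
# compared against MRA (linear regression) and a three-layer ANN by MAE.
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(297, 10))                                   # placeholder facial features
y = X @ rng.normal(size=10) + rng.normal(scale=0.1, size=297)    # placeholder arousal score

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Grid search over C, gamma (RBF-width analogue of sigma^2), and epsilon.
svr = GridSearchCV(
    SVR(kernel="rbf"),
    param_grid={"C": [0.1, 1, 10, 100],
                "gamma": [0.01, 0.1, 1],
                "epsilon": [0.01, 0.1, 0.5]},
    scoring="neg_mean_absolute_error",
    cv=5,
).fit(X_tr, y_tr)

mra = LinearRegression().fit(X_tr, y_tr)

n = X.shape[1]   # number of input variables
ann = MLPRegressor(hidden_layer_sizes=(n,), activation="logistic",
                   solver="sgd", learning_rate_init=0.1, momentum=0.1,
                   max_iter=50000, random_state=0).fit(X_tr, y_tr)

for name, model in [("SVR", svr), ("MRA", mra), ("ANN", ann)]:
    print(name, "hold-out MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```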

An Intelligent Decision Support System for Selecting Promising Technologies for R&D based on Time-series Patent Analysis (R&D 기술 선정을 위한 시계열 특허 분석 기반 지능형 의사결정지원시스템)

  • Lee, Choongseok;Lee, Suk Joo;Choi, Byounggu
    • Journal of Intelligence and Information Systems / v.18 no.3 / pp.79-96 / 2012
  • As the pace of competition dramatically accelerates and the complexity of change grows, a variety of research has been conducted to improve firms' short-term performance and to enhance their long-term survival. In particular, researchers and practitioners have paid attention to identifying promising technologies that give a firm competitive advantage. The discovery of promising technology depends on how a firm evaluates the value of technologies; thus, many evaluation methods have been proposed. Approaches based on experts' opinion have been widely accepted for predicting the value of technologies. Whereas this approach provides in-depth analysis and ensures the validity of analysis results, it is usually cost- and time-ineffective and is limited to qualitative evaluation. Many studies attempt to forecast the value of technology by using patent information to overcome the limitations of the expert-opinion approach. Patent-based technology evaluation has served as a valuable assessment approach for technological forecasting because a patent contains a full and practical description of a technology with a uniform structure. Furthermore, it provides information that is not divulged in any other source. Although the patent-information approach has contributed to our understanding of the prediction of promising technologies, it has some limitations, because prediction is made based only on past patent information and the interpretations of patent analyses are not consistent. In order to fill this gap, this study proposes a technology forecasting methodology that integrates the patent information approach and an artificial intelligence method. The methodology consists of three modules: evaluation of how promising technologies are, implementation of a technology value prediction model, and recommendation of promising technologies. In the first module, how promising a technology is evaluated from three different and complementary dimensions: impact, fusion, and diffusion. The impact of a technology refers to its influence on the development and improvement of future technologies, and is also clearly associated with its monetary value. The fusion of a technology denotes the extent to which it fuses different technologies, and represents the breadth of search underlying the technology. Fusion can be calculated per technology or per patent, so this study measures two types of fusion index: fusion index per technology and fusion index per patent. Finally, the diffusion of a technology denotes its degree of applicability across scientific and technological fields. In the same vein, a diffusion index per technology and a diffusion index per patent are considered. In the second module, the technology value prediction model is implemented using an artificial intelligence method. This study uses the values of the five indexes (i.e., impact index, fusion index per technology, fusion index per patent, diffusion index per technology, and diffusion index per patent) at earlier times (e.g., t-n, t-n-1, t-n-2, …) as input variables. The output variables are the values of the five indexes at time t, which are used for learning. The learning method adopted in this study is the backpropagation algorithm. In the third module, this study recommends final promising technologies based on the analytic hierarchy process (AHP). AHP provides the relative importance of each index, leading to a final promising-technology index. The applicability of the proposed methodology is tested using U.S. patents in international patent class G06F (i.e., electronic digital data processing) from 2000 to 2008. The results show that the mean absolute error of predictions produced by the proposed methodology is lower than that produced by multiple regression analysis for the fusion indexes. However, for the remaining indexes, the mean absolute error of the proposed methodology is slightly higher than that of multiple regression analysis. These unexpected results may be explained, in part, by the small number of patents: since this study only uses patent data in class G06F, the number of sample patents is relatively small, leading to incomplete learning of the complex artificial intelligence structure. In addition, the fusion index per technology and the impact index are found to be important criteria for predicting promising technology. This study attempts to extend existing knowledge by proposing a new methodology for predicting technology value that integrates patent information analysis and an artificial intelligence network. It helps managers who want to plan technology development and policy makers who want to implement technology policy by providing a quantitative prediction methodology. In addition, this study could help other researchers by providing a deeper understanding of the complex technological forecasting field.
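The prediction and recommendation modules described above can be illustrated with a minimal Python sketch. It assumes a placeholder time series of the five index values, a backpropagation regressor standing in for the paper's network, and an arbitrary AHP pairwise-comparison matrix; none of the numbers reflect the study's actual data or judgments.

```python
# Hypothetical sketch: predict the five patent indexes at time t from lagged
# values with a backpropagation network, then weight them via AHP.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
T, n_idx, n_lags = 60, 5, 3              # periods, indexes, lagged inputs
series = rng.random((T, n_idx))          # placeholder index time series

# Build (lagged inputs -> current indexes) training pairs.
X = np.hstack([series[i:T - n_lags + i] for i in range(n_lags)])
y = series[n_lags:]

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                   random_state=1).fit(X, y)
pred_t = net.predict(X[-1:])[0]          # predicted five indexes at time t

# AHP: the principal eigenvector of a pairwise-comparison matrix gives weights.
A = np.array([[1,   3,   3,   5, 5],
              [1/3, 1,   1,   3, 3],
              [1/3, 1,   1,   3, 3],
              [1/5, 1/3, 1/3, 1, 1],
              [1/5, 1/3, 1/3, 1, 1]], dtype=float)   # illustrative judgments
vals, vecs = np.linalg.eig(A)
w = np.abs(vecs[:, np.argmax(vals.real)].real)
w /= w.sum()

promising_score = float(pred_t @ w)      # final promising-technology score
print("index weights:", np.round(w, 3), "score:", round(promising_score, 3))
```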

Clinical Study of Corrosive Esophagitis (부식성 식도염에 관한 임상적 고찰)

  • 이원상;정승규;최홍식;김상기;김광문;홍원표
    • Proceedings of the KOR-BRONCHOESO Conference / 1981.05a / pp.6-7 / 1981
  • With the improvement of the living standard and educational level of the people, there is increasing awareness of the dangers of toxic substances and lethal drugs. In addition, governmental control of these substances has led to a progressive decrease in accidents with corrosive substances. However, there are still sporadic incidents of suicide attempts with such substances, owing to the imbalance between cultural development in society and individual emotion. The problem is compounded by the variety of corrosive agents easily available to the public as a result of considerable industrial development and industrialization. Salzen (1920) and Bokey (1924) were pioneers on the subject of corrosive esophagitis and esophageal stenosis treated by the dilatation method. Since then there has been continuing progress on the subject, with research on various acid (Pitkin, 1935; Carmody, 1936) and alkali (Tree, 1942; Tucker, 1951) corrosive agents, and on the use of steroids (Spain, 1950) and antibiotics. Recently, early esophagoscopic examination has been emphasized for the purpose of determining the course of treatment in corrosive esophagitis patients. In order to find effective treatment for such patients in the future, the authors selected 96 corrosive esophagitis patients who were admitted and treated at the ENT department of Severance Hospital from 1971 to March 1981 for a clinical study. 1. Sex incidence: male : female = 1 : 1.7; age incidence: the 21-30 years age group was largest, with 38 cases (39.6%). 2. Suicide attempt: 80 cases (83.3%); accidental ingestion: 16 cases (16.7%). Among those who ingested the substance accidentally, children below ten years were most numerous, with nine patients. 3. Incidence by agent: acetic acid, 41 cases (41.8%); lye, 20 cases (20.4%); HCl, 17 cases (17.3%). There was a trend of rapid rise in the incidence of acidic corrosive agents, especially acetic acid. 4. Lavage: 57 cases (81.1%). 5. Nasogastric tube insertion: 80 cases (83.3%); no insertion: 16 cases (16.7%), of which late admittance accounted for 10 cases, failure for 4 cases, and other causes for 2 cases. 6. Tracheostomy: 17 cases (17.7%); indications were respiratory problems (75.0%) and mental problems (25.0%). 7. Early endoscopy: 11 cases (11.5%), of which 6 cases (54.4%) were within 48 hours. Endoscopic results: moderate mucosal ulceration, 8 cases (72.7%); mild mucosal erythema, 2 cases (18.2%); severe mucosal ulceration, 1 case (9.1%). Among those who underwent early endoscopic examination, 6 patients were confirmed to have mild lesions and were discharged after endoscopy. The average period of admittance in the cases of nasogastric tube insertion was 4 weeks. 8. Nasogastric tube indwelling period: average 11.6 days; recently, our treatment of corrosive esophagitis patients with an indwelling nasogastric tube has been determined according to the findings of early endoscopy. 9. Patients in whom steroid administration was withheld or delayed: 47 cases (48.9%); causes were kind of drug (acid, unknown), 12 cases; late admittance, 11 cases; mild case, 9 cases; contraindication, 7 cases; other, 8 cases. 10. Management of stricture: bougienage, 7 cases; feeding gastrostomy, 6 cases; other surgical management, 4 cases. 11. Complications: 27 cases (28.1%); cardio-pulmonary, 10 cases; visceral rupture, 8 cases; massive bleeding, 6 cases; renal failure, 4 cases; other, 2 cases; death or moribund discharge, 8 cases. 12. Follow-up cases: 23 cases; esophageal stricture, 13 cases. Sites of stricture: hypopharynx, 1 case; mid third of esophagus, 5 cases; upper third of esophagus, 3 cases; lower third of esophagus, 3 cases; pylorus, 1 case; diffuse esophageal stenosis, 1 case.


Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.69-94 / 2017
  • Recently, the increasing demand for big data analysis has been driving the vigorous development of related technologies and tools. In addition, the development of IT and the increased penetration rate of smart devices are producing a large amount of data. Accordingly, data analysis technology is rapidly becoming popular, and attempts to acquire insights through data analysis have been continuously increasing. This means that big data analysis will become more important in various industries for the foreseeable future. Big data analysis is generally performed by a small number of experts and delivered to each analysis requester. However, growing interest in big data analysis has stimulated computer programming education and the development of many programs for data analysis. Accordingly, the entry barriers to big data analysis are gradually lowering and data analysis technology is spreading; as a result, big data analysis is expected to be performed by the requesters of the analysis themselves. Along with this, interest in various kinds of unstructured data is continually increasing, and a lot of attention is focused on using text data in particular. The emergence of new platforms and techniques using the web has brought about the mass production of text data and active attempts to analyze it. Furthermore, the results of text analysis have been utilized in various fields. Text mining is a concept that embraces various theories and techniques for text analysis. Among the many text mining techniques utilized for various research purposes, topic modeling is one of the most widely used and studied. Topic modeling is a technique that extracts the major issues from a large set of documents, identifies the documents that correspond to each issue, and provides the identified documents as a cluster. It is evaluated as a very useful technique in that it reflects the semantic elements of documents. Traditional topic modeling is based on the distribution of key terms across the entire document collection; thus, it is essential to analyze the entire collection at once to identify the topic of each document. This requirement causes a long analysis time when topic modeling is applied to a large number of documents. In addition, it has a scalability problem: the processing time increases exponentially with the number of analysis objects. This problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied to topic modeling: a large number of documents is divided into sub-units, and topics are derived by repeatedly applying topic modeling to each unit. This method can be used for topic modeling on a large number of documents with limited system resources and can improve the processing speed of topic modeling. It can also significantly reduce analysis time and cost, because documents can be analyzed in each location without combining all the documents to be analyzed. However, despite many advantages, this method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire collection is unclear: local topics can be identified in each unit, but global topics cannot. Second, a method for measuring the accuracy of the proposed methodology needs to be established; that is, assuming that the global topics are the ideal answer, the difference of a local topic from the corresponding global topic needs to be measured. Because of these difficulties, this approach has not been studied sufficiently compared with other studies dealing with topic modeling. In this paper, we propose a topic modeling approach to solve the above two problems. First of all, we divide the entire document cluster (global set) into sub-clusters (local sets) and generate a reduced entire document cluster (RGS, reduced global set) that consists of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. Along with this, we verify the accuracy of the proposed methodology by detecting whether each document is assigned to the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology. In addition, through an additional experiment, we confirm that the proposed methodology can provide results similar to topic modeling on the entire collection. We also propose a reasonable method for comparing the results of both methods.
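The divide-and-conquer idea described above can be sketched roughly in Python with scikit-learn's LDA implementation. This is not the paper's actual RGS procedure: the toy corpus, the choice of one delegate document per local topic, and the cosine-similarity mapping between local and global topic-word vectors are all illustrative assumptions.

```python
# Hypothetical sketch: split documents into local sets, run LDA per set,
# run LDA on a reduced global set of delegate documents, then map
# local topics to global topics by topic-word similarity.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

docs = ["stock market rises on earnings", "new phone released by vendor",
        "market volatility and bond yields", "vendor updates phone software",
        "earnings season lifts stocks", "smartphone camera review"] * 50  # placeholder corpus

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)

n_topics, n_local_sets = 2, 3
splits = np.array_split(np.arange(len(docs)), n_local_sets)

delegates, local_models = [], []
for idx in splits:
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0).fit(X[idx])
    local_models.append(lda)
    # Delegate documents: the document closest to each local topic (an assumption).
    doc_topic = lda.transform(X[idx])
    delegates.extend(idx[doc_topic.argmax(axis=0)])

# Topic modeling on the reduced global set (RGS) of delegate documents.
global_lda = LatentDirichletAllocation(n_components=n_topics, random_state=0).fit(X[delegates])

# Map each local topic to the most similar global topic via topic-word vectors.
for s, lda in enumerate(local_models):
    sim = cosine_similarity(lda.components_, global_lda.components_)
    print(f"local set {s}: local->global topic map", sim.argmax(axis=1))
```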

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems / v.18 no.2 / pp.29-45 / 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. The statistical techniques traditionally used in bond rating include multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis. However, one major drawback is that they are based on strict assumptions, including linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions of traditional statistics have limited their application in the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). In particular, SVM is recognized as a new and promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories. SVM is simple enough to be analyzed mathematically and leads to high performance in practical applications. SVM implements the structural risk minimization principle and seeks to minimize an upper bound of the generalization error. In addition, the solution of SVM may be a global optimum, so overfitting is unlikely to occur with SVM. Furthermore, SVM does not require many data samples for training, since it builds prediction models using only some representative samples near the boundaries, called support vectors. A number of experimental studies have indicated that SVM has been successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can potentially degrade SVM's performance. First, SVM was originally proposed for solving binary-class classification problems. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not perform as well in multi-class classification as SVM does in binary-class classification. Second, approximation algorithms (e.g., decomposition methods, the sequential minimal optimization algorithm) can be used to reduce computation time for multi-class problems, but they may deteriorate classification performance. Third, a difficulty in multi-class prediction problems is the data imbalance problem, which can occur when the number of instances in one class greatly outnumbers the number of instances in another class. Such data sets often cause a default classifier to be built due to the skewed boundary, reducing classification accuracy. SVM ensemble learning is one machine learning approach for coping with the above drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms, and AdaBoost is one of the most widely used ensemble learning techniques. It constructs a composite classifier by sequentially training classifiers while increasing the weight on misclassified observations through iterations. The observations that are incorrectly predicted by previous classifiers are chosen more often than examples that are correctly predicted. Thus Boosting attempts to produce new classifiers that are better able to predict examples for which the current ensemble's performance is poor. In this way, it can reinforce the training of misclassified observations of the minority class. This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multiclass prediction problem. Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, it can perform the learning process considering the geometric mean-based accuracy and errors of multiple classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine the feasibility of MGM-Boost. 10-fold cross-validation was performed three times with different random seeds in order to ensure that the comparison among the three classifiers did not happen by chance. For each 10-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and then each set is in turn used as the test set while the classifier trains on the other nine sets; that is, cross-validated folds have been tested independently for each algorithm. Through these steps, we obtained results for the classifiers on each of the 30 experiments. In the comparison of arithmetic mean-based prediction accuracy between individual classifiers, MGM-Boost (52.95%) shows higher prediction accuracy than both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher prediction accuracy than AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test is used to examine whether the performance of each classifier over the 30 folds is significantly different. The results indicate that the performance of MGM-Boost is significantly different from that of the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
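The evaluation setup described above can be approximated with a short Python sketch. MGM-Boost itself is the paper's own algorithm and is not implemented here; the sketch only shows standard AdaBoost with SVM base learners on placeholder imbalanced multi-class data, scored by arithmetic accuracy and by the geometric mean of per-class recalls (the notion MGM-Boost builds on), over repeated 10-fold cross-validation.

```python
# Hypothetical sketch: AdaBoost over SVM base learners on an imbalanced
# multi-class data set, with arithmetic and geometric-mean accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score, recall_score

X, y = make_classification(n_samples=600, n_classes=3, n_informative=6,
                           weights=[0.6, 0.3, 0.1], random_state=0)

def geometric_mean_accuracy(y_true, y_pred):
    # Geometric mean of per-class recalls; 0 if any class is entirely missed.
    recalls = recall_score(y_true, y_pred, average=None)
    return float(np.prod(recalls) ** (1.0 / len(recalls)))

acc, gmean = [], []
for seed in range(3):                                   # three repetitions
    skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
    for train_idx, test_idx in skf.split(X, y):
        # scikit-learn >= 1.2 uses `estimator`; older versions use `base_estimator`.
        clf = AdaBoostClassifier(estimator=SVC(kernel="rbf", gamma="scale"),
                                 n_estimators=30, algorithm="SAMME",
                                 random_state=seed)
        clf.fit(X[train_idx], y[train_idx])
        pred = clf.predict(X[test_idx])
        acc.append(accuracy_score(y[test_idx], pred))
        gmean.append(geometric_mean_accuracy(y[test_idx], pred))

print("arithmetic accuracy:", round(float(np.mean(acc)), 4))
print("geometric-mean accuracy:", round(float(np.mean(gmean)), 4))
```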

Surgical Treatment for Isolated Aortic Endocarditis: a Comparison with Isolated Mitral Endocarditis (대동맥 판막만을 침범한 감염성 심내막염의 수술적 치료: 승모판막만을 침범한 경우와 비교 연구)

  • Hong, Seong-Beom;Park, Jeong-Min;Lee, Kyo-Seon;Ryu, Sang-Woo;Yun, Ju-Sik;CheKar, Jay-Key;Yun, Chi-Hyeong;Kim, Sang-Hyung;Ahn, Byoung-Hee
    • Journal of Chest Surgery / v.40 no.9 / pp.600-606 / 2007
  • Background: Infective endocarditis shows high surgical mortality and morbidity rates, especially aortic endocarditis. This study investigates the clinical characteristics and operative results of isolated aortic endocarditis. Material and Method: From July 1990 to May 2005, 25 patients with isolated aortic endocarditis (Group I; male : female = 18 : 7, mean age 43.2±18.6 years) and 23 patients with isolated mitral endocarditis (Group II; male : female = 10 : 13, mean age 43.2±17.1 years) underwent surgical treatment in our hospital. In Group I, all patients had native valve endocarditis and 7 patients showed a bicuspid aortic valve. In Group II, two patients had prosthetic valve endocarditis and one patient developed mitral endocarditis after a mitral valvuloplasty. Positive blood cultures were obtained from 11 (44.0%) patients in Group I and 10 (43.3%) patients in Group II. The pre-operative left ventricular ejection fractions of the two groups were 60.8±8.7% and 62.1±8.1% (p=0.945), respectively. In Group I there was moderate to severe aortic regurgitation in 18 patients, and vegetations were detected in 17 patients. In Group II there was moderate to severe mitral regurgitation in 19 patients, and vegetations were found in 18 patients. One patient had a ventricular septal defect and another patient underwent a Maze operation with microwaves due to atrial fibrillation. We performed echocardiography before discharge and each year during follow-up. The mean follow-up period was 37.2±23.5 (range 9~123) months. Result: Postoperative complications included three cases of low cardiac output in Group I, and one case each of re-operation for bleeding and of low cardiac output in Group II. One patient in Group I died from an intracranial hemorrhage on the first day after surgery, but there were no early deaths in Group II. The 1-, 3-, and 5-year valve-related event-free rates were 92.0%, 88.0%, and 88.0% for Group I, and 91.3%, 76.0%, and 76.0% for Group II, respectively. The 1-, 3-, and 5-year survival rates were 96.0%, 96.0%, and 96.0% for Group I, and 100%, 84.9%, and 84.9% for Group II, respectively. Conclusion: Acceptable surgical and mid-term clinical results were seen for aortic endocarditis.

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.25-38 / 2019
  • Selecting high-quality information that meets the interests and needs of users from the overflowing mass of content is becoming more important as information generation continues to grow. In this flood of information, efforts are being made to better reflect the user's intention in search results, rather than treating the information request as a simple string. Large IT companies such as Google and Microsoft also focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. In particular, finance is one of the fields expected to benefit from text data analysis, because it constantly generates new information and the earlier the information is obtained, the more valuable it is. Automatic knowledge extraction can be effective in areas such as the financial sector, where the information flow is vast and new information continues to emerge. However, automatic knowledge extraction faces several practical difficulties. First, it is difficult to build corpora from different fields with the same algorithm and to extract good-quality triples. Second, producing labeled text data manually becomes more difficult as the extent and scope of knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult due to the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome the limits described above and improve the semantic performance of stock-related information search, this study attempts to extract knowledge entities using a neural tensor network and to evaluate their performance. Different from other references, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the effectiveness of the model. From these processes, this study has three points of significance. First, a practical and simple automatic knowledge extraction method that can be readily applied is presented. Second, the possibility of performance evaluation is shown through a simple problem definition. Finally, the expressiveness of the knowledge is increased by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study confirming the usefulness of the presented model, experts' reports on 30 individual stocks, the top 30 items by publication frequency from May 30, 2017 to May 21, 2018, are used. The total number of reports is 5,600; 3,074 reports, about 55% of the total, are designated as the training set, and the other 45% of reports are designated as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. After that, the same number of score functions as stocks are trained using a neural tensor network. Thus, when a new entity from the testing set appears, we can calculate its score with every score function, and the stock of the function with the highest score is predicted as the item related to the entity. To evaluate the presented model, we confirm its prediction power and determine whether the score functions are well constructed by calculating the hit ratio over all reports of the testing set. As a result of the empirical study, the presented model shows 69.3% hit accuracy for the testing set, which consists of 2,526 reports. This hit ratio is meaningfully high despite some constraints on the research. Looking at the prediction performance of the model for each stock, only 3 stocks, LG ELECTRONICS, KiaMtr, and Mando, show far lower performance than average. This result may be due to an interference effect with other similar items and the generation of new knowledge. In this paper, we propose a methodology to find the key entities, or their combinations, that are necessary to search for related information in accordance with the user's investment intention. Graph data is generated by using only the named entity recognition tool and applied to the neural tensor network without a learning corpus or word vectors for the field. The empirical test confirms the effectiveness of the presented model as described above. However, there are also some limits and aspects to complement. Most notably, the phenomenon that the model's performance is especially bad for only some stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented in this study can be used for the purpose of semantically matching new text information with the related stocks.
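The scoring model described above follows the standard Neural Tensor Network form g(e1, e2) = u^T f(e1^T W[1:k] e2 + V[e1; e2] + b). Below is a minimal PyTorch sketch of such a score function over one-hot entity vectors, with one scorer trained per stock; the dimensions, the fixed "stock" entity on one side of each pair, the negative sampling, and the margin loss are illustrative assumptions, not the paper's exact architecture or training procedure.

```python
# Hypothetical sketch: a Neural Tensor Network-style score function over
# one-hot entity vectors; one such scorer could be trained per stock, and a
# new entity assigned to the stock whose scorer gives the highest score.
import torch
import torch.nn as nn

class NTNScorer(nn.Module):
    def __init__(self, dim: int, k: int = 4):
        super().__init__()
        self.W = nn.Parameter(torch.randn(k, dim, dim) * 0.01)  # bilinear tensor slices
        self.V = nn.Linear(2 * dim, k)                           # standard linear term
        self.u = nn.Linear(k, 1, bias=False)                     # output weights

    def forward(self, e1: torch.Tensor, e2: torch.Tensor) -> torch.Tensor:
        # Bilinear term: for each slice W_k, compute e1^T W_k e2 (batched).
        bilinear = torch.einsum("bi,kij,bj->bk", e1, self.W, e2)
        hidden = torch.tanh(bilinear + self.V(torch.cat([e1, e2], dim=1)))
        return self.u(hidden).squeeze(-1)                        # one score per pair

# Placeholder usage for a single stock's scorer.
dim = 200
pos_entities = torch.eye(dim)[:100]          # one-hot entities observed for this stock
neg_entities = torch.eye(dim)[100:]          # one-hot entities never observed for it
stock_vec = pos_entities[0].repeat(100, 1)   # assumed fixed "stock" side of each pair

scorer = NTNScorer(dim)
optimizer = torch.optim.Adam(scorer.parameters(), lr=1e-2)
for _ in range(50):
    pos = scorer(pos_entities, stock_vec)
    neg = scorer(neg_entities, stock_vec)
    loss = torch.relu(1.0 - pos + neg).mean()   # max-margin: positives above negatives
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("mean score, positive vs negative entities:",
      scorer(pos_entities, stock_vec).mean().item(),
      scorer(neg_entities, stock_vec).mean().item())
```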

An Essay on the Change of Jinju Sword Dance after being designated as an Important Intangible Cultural Asset (<진주검무> 중요무형문화재 지정 이후의 변화에 관한 소고)

  • Lee, Jong Sook
    • Korean Journal of Heritage: History & Science / v.49 no.1 / pp.4-21 / 2016
  • The purpose of this study is to investigate the changes of Jinju Sword Dance, the characteristics of those changes, and the current condition of its preservation and succession after its designation as Important Intangible Cultural Property No. 12 on January 16th, 1967. In other words, this study traces how the present state of the dance was established through changes over generations. As of 2015, three generations of holders have been approved since 1967. In 1967, 8 members of the 1st-generation holders were selected from the gisaengs of Gwonbeon. However, succession training was incomplete due to conflicts among the holders, the deaths of some holders, and the economic activities of individuals. As the need for a pivot for succession training and activities rose, Seong, Gye-Ok was additionally approved as the 2nd-generation holder on June 21st, 1978. Seong, Gye-Ok, who had never been a gisaeng, changed the dance dramatically with many new attempts. After the death of Seong, Gye-Ok in 2009, Kim, Tae-Yeon and Yu, Yeong-Hee were approved as the 3rd-generation holders in February, 2010. Based on resources including the "Cultural Research Reports of Important Intangible Cultural Properties" of 1966 and videos up to 2014, the changes in the dance and its surroundings are as follows. 1. The formation of the musical accompaniment changed over the three generations. In the video of the 1st generation (in 1970), the performance lasted about 15 minutes, whereas in the video of the 2nd generation it lasted 25 minutes. The Yumbuldoduri rhythm came to be treated as Ginyumbul (Sangryeongsan) and played more slowly, and the original dance, requiring only 15 rhythms, was extended to 39 rhythms to provide a longer performance time. In the 3rd generation, the dance recovered the 15 rhythms while keeping the term Ginyumbul. The fact that Yumbul was played for 3 minutes in the 1st generation but for 5 minutes in the 3rd generation shows a tendency toward slowness beginning in the 2nd generation. 2. For the composition of the dance, the performance included an additional 20 rhythms of Ginyumbul and an Ah (亞)-shaped formation from the 2nd generation. From the 3rd generation, the performance excluded that formation, which had no traditional basis. For the movement of the dance, the bridge poses of Ggakjittegi and Bangsukdoli have become visibly inflexible, and the extension of the time value of one beat has made the dance less vibrant. 3. At the designation as an important intangible cultural property (in 1967), swords with rotatable necks were used, whereas the dancers have been using swords with non-rotatable necks since the late 1970s, when the 2nd-generation holder began to use them. The swords in the "Research Reports" (of 1966) were pointy and semilunar, whereas straight swords are used currently; the use of straight swords can be confirmed from the videos after 1970. 4. There is no change in the wearing of Jeonlib, Jeonbok, and Hansam, whereas the arrangement of the Saekdong of the Hansam differs from the arrangement shown in the "Research Reports." Also, dancers appear to have begun wearing navy skirts when the swords with non-rotatable necks came into use. These results show that Jinju Sword Dance has been actively changed over the 50 years since its designation, and the 2nd-generation holder, Seong, Gye-Ok, was the pivot of those changes. However, Jinju Sword Dance, which was already designated as an important intangible cultural property, is regarded as merely a victim of the change experiments arising from the project to restore Gyobang culture in Jinju, and it is a priority to conduct studies with historical legitimacy. First of all, the slowing beat should be addressed as the main factor reducing both the liveliness and the dynamic beauty of the dance.

Analysis of shopping website visit types and shopping pattern (쇼핑 웹사이트 탐색 유형과 방문 패턴 분석)

  • Choi, Kyungbin;Nam, Kihwan
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.85-107 / 2019
  • Online consumers browse products belonging to a particular product line or brand in order to purchase, or simply navigate widely without making a purchase. Research on the behavior and purchases of online consumers has progressed steadily, and related services and applications based on consumer behavior data have been developed in practice. In recent years, customization strategies and recommendation systems have been utilized thanks to the development of big data technology, and attempts are being made to optimize users' shopping experience. Even with such attempts, however, only a small portion of online consumers who visit a website actually move on to the purchase stage. This is because online consumers do not visit a website only to purchase products; they use and browse websites differently according to their shopping motives and purposes. Therefore, analyzing the various types of visits, not only purchase visits, is important for understanding the behavior of online consumers. In this study, we performed a clustering analysis of sessions based on clickstream data from an e-commerce company in order to explain the diversity and complexity of the search behavior of online consumers and to typify that behavior. For the analysis, we converted more than 8 million page-level data points into visit-level sessions, resulting in a total of over 500,000 website visit sessions. For each visit session, 12 characteristics such as page views, duration, search diversity, and page type concentration were extracted for the clustering analysis. Considering the size of the data set, we performed the analysis using the Mini-Batch K-means algorithm, which has advantages in learning speed and efficiency while maintaining clustering performance similar to that of the K-means algorithm. The optimal number of clusters was found to be four, and the differences in session characteristics and purchase rates were identified for each cluster. An online consumer visits the website several times, learns about the product, and then decides on the purchase. In order to analyze the purchasing process over several visits, we constructed visiting-sequence data for each consumer based on the navigation patterns derived from the clustering analysis. The visit sequence data comprise a series of visits until one purchase is made, and the items constituting a sequence are the cluster labels derived above. We separately constructed sequence data for consumers who made purchases and for consumers who only explored products without making purchases during the same period, and then applied sequential pattern mining to extract frequent patterns from each sequence data set. The minimum support was set to 10%, and frequent patterns consist of sequences of cluster labels. While some derived patterns are common to both sequence data sets, other frequent patterns are derived from only one of them. Through a comparative analysis of the extracted frequent patterns, we found that consumers who made purchases showed a visiting pattern of repeatedly searching for a specific product before deciding to purchase it. The implication of this study is that we analyzed the search types of online consumers using large-scale clickstream data and analyzed their patterns to explain the purchasing process from a data-driven point of view. Most studies on the typology of online consumers have focused on the characteristics of each type and the key factors distinguishing the types. In this study, we carried out an analysis to typify the behavior of online consumers and further analyzed in what order these types are organized into a series of search patterns. In addition, online retailers will be able to improve purchase conversion through marketing strategies and recommendations tailored to the various visit types, and will be able to evaluate the effect of such strategies through changes in consumers' visit patterns.
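The session clustering step described above could look roughly like the following Python sketch, assuming a table of per-session features; the feature names, the fixed four clusters, and the purchase-rate summary are placeholders, and the subsequent sequential pattern mining step is only indicated in comments.

```python
# Hypothetical sketch: cluster visit sessions with Mini-Batch K-means, then
# summarize purchase rates per cluster and build per-consumer label sequences.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import MiniBatchKMeans

rng = np.random.default_rng(0)
n_sessions = 10_000
sessions = pd.DataFrame({
    "consumer_id": rng.integers(0, 2_000, n_sessions),
    "page_views": rng.poisson(12, n_sessions),
    "duration_sec": rng.exponential(300, n_sessions),
    "search_diversity": rng.random(n_sessions),
    "page_type_concentration": rng.random(n_sessions),
    "purchased": rng.random(n_sessions) < 0.05,
})                                             # placeholder session features

feature_cols = ["page_views", "duration_sec", "search_diversity",
                "page_type_concentration"]
X = StandardScaler().fit_transform(sessions[feature_cols])

km = MiniBatchKMeans(n_clusters=4, batch_size=1024, random_state=0, n_init=10)
sessions["cluster"] = km.fit_predict(X)

# Purchase rate per visit type (cluster).
print(sessions.groupby("cluster")["purchased"].mean())

# Per-consumer sequences of cluster labels, ready for sequential pattern
# mining (e.g., PrefixSpan) with a minimum support threshold.
sequences = sessions.groupby("consumer_id")["cluster"].apply(list)
print(sequences.head())
```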

Expression and Deployment of Folk Taoism(民間道敎) in the late of Chosŏn Dynasty (조선 후기 민간도교의 발현과 전개 - 조선후기 관제신앙, 선음즐교, 무상단 -)

  • Kim, Youn-Gyeong
    • The Journal of Korean Philosophical History / no.35 / pp.309-334 / 2012
  • This study examines in what forms Folk Taoism existed in the late Chosŏn Dynasty and discusses the contents and characteristics of the ideological aspects forming its foundation. While Guan Yu Belief (關帝信仰) in the late Chosŏn Dynasty was a folk belief focusing on Guan Yu, Seoneumjeulgyo (善陰?敎) and Musangdan (無相壇) were organized religious groups. In the case of Seoneumjeulgyo, 'Seoneumjeul' contains the Confucian perspective of Tian (天觀), but its ascetic practice is recitation of the name of the Buddha, and the objects of belief are Gwanje, Munchang, and Buwoo. This shows the unification of Confucianism, Buddhism, and Taoism within Folk Taoism in the late Chosŏn Dynasty. Guan Yu Belief, which started at the national level led by the royal family of Chosŏn after the Japanese Invasion of Korea in 1592, became firmly settled in non-official circles. Guan Yu in the late Chosŏn Dynasty was expressed as the incarnation of loyalty and filial piety as well as a god controlling life, death, and fate. As this divine power and empowerment spread among the people through scriptures, Guan Yu Belief became established as an object for defeating evil and invoking blessings. Seoneumjeulgyo was a religious group that imitated 'Paekryunsa (白蓮社)' of Ming-Qing China. Seoneumjeulgyo emphasized 'sympathy' with God through chanting, and it called writing produced in a state of religious ecstasy 'Binan (飛鸞).' Binan, also called revelation, means what is revealed from heaven in a state of unity with God. Seoneumjeulgyo pursued the state of unity with God through the recitation of spells and made the scriptures written in that state its central doctrine. Musangdan published and spread Nanseo (鸞書, books written by revelation from God) and Seonso (善書) while worshipping Sam Sung Je Kun (三聖帝君). The scriptures of Folk Taoism in the late Chosŏn Dynasty can be roughly divided into Nanseo and Seonso. Nanseo are books written by revelation from God, and Seonso are books that set the standards of good deeds and encourage people to perform them, such as Taishangganyingbian (太上感應篇) and Gonghwagyuk (功過格). The characteristics of Folk Taoism in the late Chosŏn Dynasty are as follows. First, the shrines of Guan Yu built for political reasons played a central role in Folk Taoism in the late Chosŏn Dynasty. Second, specific folk Taoist groups such as Temple Myoryŏnsa and Musangdan appeared in the late Chosŏn Dynasty; these belong to Nandan Taoism (鸞壇道敎), which pursued unity with God through 'sympathy' with God. Third, folk Taoism in Chosŏn was influenced by the unity of Confucianism, Buddhism, and Taoism with folk Taoism in Qing China, as well as by its form of religious organization. Fourth, the Folk Taoism scriptures of Chosŏn are divided into Nanseo and Seonso, and the Nanseo produced directly in Chosŏn are expected to be the key to revealing the characteristics of its Folk Taoism.