Subject-Balanced Intelligent Text Summarization Scheme (주제 균형 지능형 텍스트 요약 기법)
- Journal of Intelligence and Information Systems
- v.25 no.2
- pp.141-166
- 2019
Recently, channels such as social media and SNS generate enormous amounts of data. Among all kinds of data, the portion of unstructured data represented as text has increased geometrically. Because it is difficult to check all of this text data, it is important to access the data rapidly and grasp the key points of the text. Driven by this need for efficient understanding, many studies on text summarization for handling and using tremendous amounts of text data have been proposed. In particular, many summarization methods using machine learning and artificial intelligence algorithms, known as "automatic summarization," have recently been proposed to generate summaries objectively and effectively. However, almost all text summarization methods proposed to date construct summaries focused on the frequency of contents in the original documents. Such summaries have a limitation in covering low-weight subjects that are mentioned less in the original text. If a summary includes only the contents of the major subjects, bias occurs and causes loss of information, so it is hard to ascertain every subject the documents contain. To avoid this bias, it is possible to summarize with a balance among the topics a document has so that all subjects in the document can be identified, but an imbalance in the distribution of those subjects still remains. To retain subject balance in a summary, it is necessary to consider the proportion of every subject the documents originally have and also to allocate portions to the subjects equally, so that even sentences on minor subjects can be sufficiently included in the summary. In this study, we propose a "subject-balanced" text summarization method that secures balance among all subjects and minimizes the omission of low-frequency subjects. For subject-balanced summarization, we use two summary evaluation concepts, "completeness" and "succinctness." Completeness is the property that a summary should fully include the contents of the original documents, and succinctness means that a summary has minimal duplication within itself. The proposed method consists of three phases. The first phase constructs subject term dictionaries. Topic modeling is used to calculate topic-term weights, which indicate the degree to which each term is related to each topic. From the derived weights, highly related terms for every topic can be identified, and the subjects of the documents can be found from the various topics composed of terms with similar meanings. Then, a few terms that represent each subject well are selected; in this method, they are called "seed terms." However, these terms are too few to explain each subject sufficiently, so enough terms similar to the seed terms are needed for a well-constructed subject dictionary. Word2Vec is used for word expansion to find terms similar to the seed terms. Word vectors are created by Word2Vec modeling, and from these vectors the similarity between all terms can be derived using cosine similarity; the higher the cosine similarity between two terms, the stronger the relationship between them. Terms that have high similarity values with the seed terms of each subject are selected, and by filtering these expanded terms the subject dictionary is finally constructed.
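Below is a minimal sketch of the dictionary-construction phase, assuming gensim's Word2Vec; the toy corpus, the seed terms, and the 0.5 similarity cut-off are illustrative placeholders rather than the settings used in the paper.

```python
# Sketch of phase 1: expanding hypothetical seed terms into subject dictionaries
# with Word2Vec cosine similarity (gensim). Corpus, seeds, and threshold are
# illustrative assumptions, not the paper's actual settings.
from gensim.models import Word2Vec

# Tokenized review sentences (placeholder corpus standing in for the reviews).
sentences = [
    ["room", "bed", "clean", "comfortable", "spacious"],
    ["staff", "friendly", "helpful", "service", "reception"],
    ["breakfast", "buffet", "coffee", "tasty", "fresh"],
]
model = Word2Vec(sentences, vector_size=50, window=5, min_count=1, sg=1, seed=0)

# Hypothetical seed terms per subject, e.g. picked from high topic-term weights.
seed_terms = {"room": ["room", "bed"], "service": ["staff", "service"]}

subject_dictionary = {}
for subject, seeds in seed_terms.items():
    expanded = set(seeds)
    for seed in seeds:
        # most_similar ranks vocabulary terms by cosine similarity to the seed.
        for term, sim in model.wv.most_similar(seed, topn=10):
            if sim >= 0.5:  # illustrative similarity cut-off for filtering
                expanded.add(term)
    subject_dictionary[subject] = sorted(expanded)

print(subject_dictionary)
```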
The next phase allocates a subject to every sentence in the original documents. To grasp the contents of all sentences, frequency analysis is first conducted with the specific terms that compose the subject dictionaries. The TF-IDF weight of each subject is calculated after the frequency analysis, which makes it possible to determine how much each sentence explains each subject. However, the TF-IDF weight has the limitation that it can increase without bound, so by normalizing the TF-IDF weights of every subject in each sentence, all values are rescaled to the range 0 to 1. Then, by allocating to each sentence the subject with the maximum TF-IDF weight among all subjects, sentence groups are finally constructed for each subject. The last phase is summary generation. Sen2Vec is used to compute the similarity between the sentences of each subject, and a similarity matrix is formed. By repeatedly selecting sentences, it is possible to generate a summary that fully includes the contents of the original documents while minimizing duplication within the summary itself. To evaluate the proposed method, 50,000 TripAdvisor reviews are used to construct the subject dictionaries and 23,087 reviews are used to generate summaries. A comparison between the summary from the proposed method and a frequency-based summary is also performed; as a result, it is verified that the summary from the proposed method better retains the balance of all the subjects that the documents originally have.
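The following is a rough sketch of the allocation and summary-generation phases, assuming scikit-learn; TF-IDF sentence vectors stand in for the Sen2Vec embeddings used in the paper, and the greedy least-redundant selection is a simplified illustration of the repetitive sentence-selection step.

```python
# Sketch of phases 2-3: allocate each sentence to the subject with the highest
# normalized TF-IDF mass, then greedily pick per-subject sentences that are
# least redundant. TF-IDF sentence vectors stand in for Sen2Vec embeddings.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sentences = [
    "The room was clean and the bed was comfortable.",
    "Staff were friendly and the service was quick.",
    "The bed was large but the room felt small.",
]
subject_dictionary = {"room": {"room", "bed", "clean"},
                      "service": {"staff", "service", "friendly"}}

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(sentences).toarray()
vocab = vectorizer.vocabulary_

def subject_score(row, terms):
    # Sum of TF-IDF weights of dictionary terms appearing in the sentence.
    return sum(row[vocab[t]] for t in terms if t in vocab)

scores = np.array([[subject_score(row, terms)
                    for terms in subject_dictionary.values()] for row in X])
# Min-max normalization keeps every subject score in [0, 1].
rng = scores.max(axis=0) - scores.min(axis=0)
scores = (scores - scores.min(axis=0)) / np.where(rng == 0, 1, rng)
allocation = scores.argmax(axis=1)          # subject index per sentence

# Greedy selection: within each subject group, keep the sentence least similar
# to the others, which limits duplication in the summary.
summary = []
for s in range(len(subject_dictionary)):
    idx = np.where(allocation == s)[0]
    if len(idx) == 0:
        continue
    sims = cosine_similarity(X[idx])
    pick = idx[sims.sum(axis=1).argmin()]   # least redundant within its group
    summary.append(sentences[pick])

print(summary)
```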
The government recently announced various policies for developing the big-data and artificial intelligence fields to provide a great opportunity to the public with respect to the disclosure of high-quality data within public institutions. KSURE (Korea Trade Insurance Corporation) is a major public institution for financial policy in Korea, and the company is strongly committed to backing export companies with various systems. Nevertheless, there are still few cases of realized business models based on big-data analyses. In this situation, this paper aims to develop a new business model that can be applied to the ex-ante prediction of the likelihood of credit-guarantee insurance accidents. We utilize internal data from KSURE, which supports export companies in Korea, and apply machine learning models. We then conduct a performance comparison among predictive models including Logistic Regression, Random Forest, XGBoost, LightGBM, and DNN (Deep Neural Network). For decades, many researchers have tried to find better models that can help predict bankruptcy, since ex-ante prediction is crucial for corporate managers, investors, creditors, and other stakeholders. The development of prediction models for financial distress or bankruptcy originated from Smith (1930), Fitzpatrick (1932), and Merwin (1942). One of the most famous models is Altman's Z-score model (Altman, 1968), which was based on multiple discriminant analysis and is still widely used in both research and practice. The author suggests a score model that utilizes five key financial ratios to predict the probability of bankruptcy within the next two years. Ohlson (1980) introduces a logit model to complement some limitations of previous models. Furthermore, Elmer and Borowski (1988) develop and examine a rule-based, automated system that conducts the financial analysis of savings and loans. Since the 1980s, researchers in Korea have also examined the prediction of financial distress or bankruptcy. Kim (1987) analyzes financial ratios and develops a prediction model. Han et al. (1995, 1996, 1997, 2003, 2005, 2006) construct prediction models using various techniques including artificial neural networks. Yang (1996) introduces multiple discriminant analysis and a logit model. In addition, Kim and Kim (2001) utilize artificial neural network techniques for the ex-ante prediction of insolvent enterprises. Since then, many scholars have tried to predict financial distress or bankruptcy more precisely based on diverse models such as Random Forest or SVM. One major distinction of our research from previous work is that we focus on examining the predicted probability of default for each sample case, not only on investigating the classification accuracy of each model over the entire sample. Most predictive models in this paper show a classification accuracy of about 70% on the entire sample; specifically, the LightGBM model shows the highest accuracy of 71.1% and the Logit model the lowest accuracy of 69%. However, we confirm that these results are open to multiple interpretations. In a business context, we have to put more emphasis on minimizing the type 2 error, which causes more harmful operating losses for the guarantee company. Thus, we also compare classification accuracy by splitting the predicted probability of default into ten equal intervals.
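A hedged sketch of the model comparison and the ten-interval accuracy check is shown below; it runs on synthetic data standing in for the KSURE internal data, and all features, labels, and hyperparameters are illustrative assumptions rather than the paper's actual setup.

```python
# Sketch: compare several classifiers, then bin each model's predicted default
# probabilities into ten equal intervals and report per-interval accuracy and
# sample counts. Synthetic data stands in for the KSURE guarantee data.
import numpy as np
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "Logit": LogisticRegression(max_iter=1000),
    "RandomForest": RandomForestClassifier(n_estimators=300, random_state=0),
    "XGBoost": XGBClassifier(n_estimators=300, eval_metric="logloss"),
    "LightGBM": LGBMClassifier(n_estimators=300),
    "DNN": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]      # predicted probability of default
    pred = (proba >= 0.5).astype(int)
    print(f"{name}: overall accuracy = {accuracy_score(y_te, pred):.3f}")

    # Ten equal-width intervals: 0-10%, 10-20%, ..., 90-100%.
    interval = pd.cut(proba, bins=np.linspace(0, 1, 11), include_lowest=True)
    table = (pd.DataFrame({"interval": interval,
                           "correct": (pred == y_te).astype(int)})
             .groupby("interval", observed=True)
             .agg(n_samples=("correct", "size"), accuracy=("correct", "mean")))
    print(table)
```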
When we examine the classification accuracy for each interval, the Logit model has the highest accuracy of 100% for the 0~10% interval of the predicted probability of default; however, it has a relatively lower accuracy of 61.5% for the 90~100% interval. On the other hand, Random Forest, XGBoost, LightGBM, and DNN show more desirable results, since they achieve a higher level of accuracy in both the 0~10% and 90~100% intervals of the predicted probability of default, while showing lower accuracy around the 50% interval. Regarding the distribution of samples across the predicted probability of default, both the LightGBM and XGBoost models place a relatively large number of samples in the 0~10% and 90~100% intervals. Although the Random Forest model has an advantage in classification accuracy for a small number of cases, LightGBM or XGBoost could be a more desirable model since they classify a large number of cases into the two extreme intervals of the predicted probability of default, even allowing for their relatively low classification accuracy. Considering the importance of the type 2 error and total prediction accuracy, XGBoost and DNN show superior performance; Random Forest and LightGBM follow with good results, while logistic regression shows the worst performance. Nevertheless, each predictive model has a comparative advantage in terms of different evaluation standards. For instance, the Random Forest model shows almost 100% accuracy for samples that are expected to have a high probability of default. Collectively, we can construct more comprehensive ensemble models that contain multiple classification machine learning models and conduct majority voting to maximize overall performance.
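The majority-voting ensemble suggested above could be sketched as follows with scikit-learn's VotingClassifier on synthetic stand-in data; the estimators and data are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a majority-voting ensemble over several classifiers,
# fitted on synthetic stand-in data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("logit", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
        ("xgb", XGBClassifier(n_estimators=300, eval_metric="logloss")),
        ("lgbm", LGBMClassifier(n_estimators=300)),
        ("dnn", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)),
    ],
    voting="hard",                      # hard voting = simple majority vote
)
ensemble.fit(X_tr, y_tr)
print("majority-voting ensemble accuracy =",
      round(accuracy_score(y_te, ensemble.predict(X_te)), 3))
```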
Brand-switching data, frequently used in market structure analysis, are adequate for analyzing non-durable goods, because they can capture competition between two specific brands. However, brand-switching data sometimes cannot be used to analyze durable goods such as automobiles, because the main assumption that consumer preferences toward brand attributes do not change over time can be violated. Therefore, a new type of data that can precisely capture competition among durable goods is needed. Another problem of brand-switching data collected from actual purchase behavior is that they offer little explanation of why consumers consider different sets of brands. Considering these problems, the main purpose of this study is to analyze the market structure for durable goods using consideration sets. The author uses an exploratory approach and latent class clustering to identify market structure based on heterogeneous consideration sets among consumers. Then the relationship between several factors and consideration-set formation is analyzed. Some benefits and two demographic variables, sex and income, are selected as factors based on consumer behavior theory. The author analyzed the USA automotive market with the top 11 brands using the exploratory approach and latent class clustering; 2,500 respondents were randomly selected from the total sample and used for analysis. Six models concerning market structure are established for testing, where model 1 represents a non-structured market and model 6 a market structure composed of six sub-markets. This is an exploratory approach because no hypothetical market structure is defined in advance. The results showed that model 1 is insufficient to fit the data, implying that the USA automotive market is a structured market. Model 3, with three sub-markets, is significant and identified as the optimal market structure in the USA automotive market. The three sub-markets are named USA brands, Asian brands, and European brands, which implies that a country-of-origin effect may exist in the USA automotive market. A comparison between modal classification by the derived market structures and probabilistic classification by the research model was conducted to test how correctly model 3 classifies respondents; the model classifies 97% of respondents correctly. The result of this study differs from those of previous research, which used a confirmatory approach: car type and price were chosen as criteria for market structuring, and a car type-price structure was revealed as the optimal structure for the USA automotive market. This research, in contrast, used an exploratory approach without hypothetical market structures. It is not yet concluded which approach is superior. For the confirmatory approach, hypothetical market structures should be established exhaustively, because the optimal market structure is selected among the hypothesized structures. On the other hand, the exploratory approach has a potential problem in that the validity of the derived optimal market structure is somewhat difficult to verify. There is also a difference in market boundary between this research and previous research: while previous research analyzed seven car brands, this research analyzed eleven. Both studies seem to represent the entire car market, because the cumulative market share of the analyzed brands exceeds 50%, but the difference in market boundary might affect the different results. Although the two studies showed different results, it is clear that the country-of-origin effect among brands should be considered an important criterion for analyzing the USA automotive market structure.
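A minimal sketch of the segment-count comparison is given below; scikit-learn's GaussianMixture with BIC serves only as a stand-in for a proper latent-class (Bernoulli mixture) model, and the 0/1 consideration-set indicators are synthetic placeholders rather than the survey data.

```python
# Sketch of comparing 1-6 latent segments by BIC. GaussianMixture is used as a
# simple stand-in for a latent-class (Bernoulli mixture) model on synthetic
# 0/1 consideration-set indicators for 11 hypothetical brands.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n_respondents, n_brands = 2500, 11
consideration = rng.integers(0, 2, size=(n_respondents, n_brands)).astype(float)

for k in range(1, 7):                       # model 1 (unstructured) to model 6
    gm = GaussianMixture(n_components=k, covariance_type="diag", random_state=0)
    gm.fit(consideration)
    print(f"{k} segments: BIC = {gm.bic(consideration):.1f}")
```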
This research tried to explain the heterogeneity of consideration sets among consumers using benefits and two demographic factors, sex and income. Benefit works as a key variable in the consumer decision process and also as an important criterion in market segmentation. Three factors, trust/safety, image/fun to drive, and economy, were identified from nine benefit-related measures. Then the relationship between market structures and the independent variables was analyzed using multinomial regression, with the three benefit factors and two demographic factors as independent variables. The results showed that all independent variables can be used to explain why different market structures exist in the USA automotive market. For example, a male consumer who perceives all benefits as important and has a lower income tends to consider domestic brands more than European brands. The results also showed that benefits, sex, and income have an effect on consideration-set formation. Although it is generally perceived that a consumer with higher income is likely to purchase a high-priced car, it is notable that American consumers perceive the benefits of domestic brands very positively regardless of income; male consumers, in particular, showed higher loyalty to domestic brands. The managerial implications of this research are as follows. First, although the implications may be confined to the USA automotive market, the effect of sex on automotive buying behavior should be analyzed. The automotive market has traditionally been conceived as a market oriented toward male consumers, but the proportion of female consumers has grown over the years, so it is a natural outcome that Volvo and Hyundai Motors recently developed new cars targeted at the women's market. Secondly, the model used in this research can be applied more easily than those of previous research. The exploratory approach has many advantages but has been difficult to apply in practice, because it tends to involve complicated models and to require various types of data; however, the data needed for the model in this research consist of only a few items, such as purchased brands, consideration sets, some benefits, and some demographic factors, and are easy to collect from consumers.
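Below is a hedged sketch of the multinomial regression step, assuming statsmodels' MNLogit; the segment labels, benefit factors, and demographics are synthetic placeholders rather than the study's survey data.

```python
# Sketch of the multinomial regression: segment membership (e.g. USA / Asian /
# European brand segments) regressed on three benefit factors plus sex and
# income. All data are synthetic placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2500
df = pd.DataFrame({
    "trust_safety": rng.normal(size=n),
    "image_fun": rng.normal(size=n),
    "economy": rng.normal(size=n),
    "male": rng.integers(0, 2, size=n),
    "income": rng.normal(size=n),
})
segment = rng.integers(0, 3, size=n)   # 0=USA, 1=Asian, 2=European (illustrative)

X = sm.add_constant(df)
result = sm.MNLogit(segment, X).fit(disp=False)
print(result.summary())                # coefficients per segment vs. baseline
```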
HSDPA (High-Speed Downlink Packet Access) is a 3.5-generation asynchronous mobile communications service based on the third generation of W-CDMA. In Korea, it is mainly provided through videophone services. Because of the diffusion of more powerful and diversified services, along with steep advances in mobile communications technology, consumers demand a wide range of choices. However, because of the variety of technologies, which tend to overflow the market regardless of consumer preferences, consumers feel increasingly confused. Therefore, we should not adopt strategies that focus only on developing new technology on the assumption that new technologies are next-generation projects. Instead, we should understand the process by which consumers accept new forms of technology and devise schemes to lower market entry barriers through strategies that enable developers to understand and provide what consumers really want. In the Technology Acceptance Model (TAM), perceived usefulness and perceived ease of use are suggested as the most important factors affecting the attitudes of people adopting new technologies (Davis, 1989; Taylor and Todd, 1995; Venkatesh, 2000; Lee et al., 2004). Perceived usefulness is the degree to which a person believes that a particular technology will enhance his or her job performance. Perceived ease of use is the degree of subjective belief that using a particular technology will require little physical and mental effort (Davis, 1989; Morris and Dillon, 1997; Venkatesh, 2000). Perceived pleasure and perceived usefulness have been shown to clearly affect attitudes toward accepting technology (Davis et al., 1992). For example, pleasure in online shopping has been shown to positively impact consumers' attitudes toward online sellers (Eighmey and McCord, 1998; Mathwick, 2002; Jarvenpaa and Todd, 1997). The perceived risk of customers is a subjective risk, which is distinguished from an objective probabilistic risk. Perceived risk includes a psychological risk that consumers perceive when they choose brands, stores, and methods of purchase to obtain a particular item. The ability of an enterprise to revolutionize products depends on the effective acquisition of knowledge about new products (Bierly and Chakrabarti, 1996; Rothwell and Dodgson, 1991). Knowledge acquisition is the ability of a company to perceive the value of external novelty and technology (Cohen and Levinthal, 1990), to evaluate newly emerged external technology (Arora and Gambardella, 1994), and to predict the future evolution of technology accurately (Cohen and Levinthal, 1990). Consumer innovativeness is the degree to which an individual adopts innovation earlier than others in the social system (Lee, Ahn, and Ha, 2001; Gatignon and Robertson, 1985). That is, it shows how fast and how easily consumers adopt new ideas. Innovativeness is regarded as important because it has a significant effect on whether consumers adopt new products and on how fast they accept new products (Midgley and Dowling, 1978; Foxall, 1988; Hirschman, 1980). We conducted cross-national comparative research using the TAM model, which empirically verified the relationship between the factors that affect attitudes - perceived usefulness, ease of use, perceived pleasure, perceived risk, innovativeness, and perceived level of knowledge management - and attitudes toward HSDPA service.
We also verified the relationship between attitudes and usage intention for the purpose of developing more effective management methods for HSDPA service providers. For this research, 346 questionnaires were distributed among 350 students in the Republic of Korea. Because 26 of the returned questionnaires were inconsistent or had missing data, 320 questionnaires were used in the hypothesis tests. In the UK, 192 of the 200 questionnaires were retrieved, and two incomplete ones were discarded, bringing the total to 190 questionnaires used for statistical analysis. The results of the overall model analysis are as follows: Republic of Korea χ²=333.27 (p=0.0), NFI=0.88, NNFI=0.88, CFI=0.91, IFI=0.91, RMR=0.054, GFI=0.90, AGFI=0.84; UK χ²=176.57 (p=0.0), NFI=0.88, NNFI=0.90, CFI=0.93, IFI=0.93, RMR=0.062, GFI=0.90, AGFI=0.84. From the results of the hypothesis tests on Korean consumers regarding the relationship between the factors that affect intention to use HSDPA services and attitudes, we can conclude that perceived usefulness, ease of use, pleasure, a high level of knowledge management, and innovativeness promote positive attitudes toward HSDPA mobile phones. However, ease of use and perceived pleasure did not have a direct effect on intention to use the HSDPA service. This may result from the fact that the use of video phones is not yet necessary for everyday life. Moreover, attitudes toward HSDPA video phones were shown to be directly correlated with usage intention, which means that perceived usefulness, ease of use, pleasure, a high level of knowledge management, and innovativeness influence usage intention indirectly through attitudes. These relationships form the basis of the intention to buy, contributing to a situation in which consumers decide to choose carefully. A summary of the results of the hypothesis tests on European consumers revealed that perceived usefulness, pleasure, risk, and the level of knowledge management are factors that affect the formation of attitudes, while ease of use and innovativeness do not have an effect on attitudes. In particular, with regard to effect size, perceived usefulness has the largest effect on attitudes, followed by pleasure and knowledge management; perceived risk, on the contrary, has a smaller effect on attitudes. In the Asian model, ease of use and perceived pleasure were found not to have a direct effect on intention to use. However, because attitudes generally affect the intention to use, perceived usefulness, pleasure, risk, and knowledge management may be considered key factors in attitude development from which usage intention arises. In conclusion, perceived usefulness, pleasure, and the level of knowledge management affect attitude formation in both Asian and European consumers, and such attitudes shape these consumers' intention to use. Furthermore, the hypotheses that ease of use and perceived pleasure directly affect usage intention are rejected. However, ease of use, perceived risk, and innovativeness showed different results: perceived risk had no effect on attitude formation among Asians, while ease of use and innovativeness had no effect on attitudes among Europeans.
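A simplified path-model sketch of the TAM-style relations and their fit indices is given below, assuming the Python SEM package semopy; the variables are synthetic composite scores, not the paper's measurement items, and the specification only illustrates the kind of model behind the reported fit statistics.

```python
# Sketch of estimating a simplified TAM-style path model and its fit indices
# with semopy; variables are synthetic composite scores (illustrative only).
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(2)
n = 320
usefulness = rng.normal(size=n)
ease = rng.normal(size=n)
pleasure = rng.normal(size=n)
attitude = 0.5 * usefulness + 0.3 * pleasure + rng.normal(scale=0.5, size=n)
intention = 0.7 * attitude + rng.normal(scale=0.5, size=n)
data = pd.DataFrame({"usefulness": usefulness, "ease": ease,
                     "pleasure": pleasure, "attitude": attitude,
                     "intention": intention})

desc = """
attitude ~ usefulness + ease + pleasure
intention ~ attitude + usefulness
"""
model = semopy.Model(desc)
model.fit(data)
print(model.inspect())              # path coefficients and p-values
print(semopy.calc_stats(model))     # chi-square, CFI, GFI, AGFI, RMSEA, etc.
```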
Small and medium-sized enterprises (hereinafter SMEs) are already at a competitive disadvantage when compared to large companies with more abundant resources. Manufacturing SMEs not only need a great deal of information for new product development to sustain growth and survival, but also seek networking to overcome resource constraints; however, they face limitations due to their size. In a new era in which connectivity increases the complexity and uncertainty of the business environment, SMEs are increasingly urged to find information and solve networking problems. To address these problems, government-funded research institutes play an important role in resolving the information asymmetry problem of SMEs. The purpose of this study is to identify the differentiating characteristics of SMEs that utilize the public information support infrastructure provided to enhance the innovation capacity of SMEs, and how its use contributes to corporate performance. We argue that an infrastructure for providing information support to SMEs is needed as part of the effort to strengthen the role of government-funded institutions; in this study, we specifically identify the target of such a policy and empirically demonstrate the effects of such policy-based efforts. Our goal is to help establish strategies for building the information supporting infrastructure. To achieve this purpose, we first classified the characteristics of SMEs that utilize the information supporting infrastructure provided by government-funded institutions. This allows us to verify whether selection bias appears in the analyzed group, which helps clarify the interpretative limits of our study results. Next, we performed mediator and moderator effect analyses for multiple variables to examine the process through which the use of the information supporting infrastructure improves external networking capabilities and, in turn, enhances product competitiveness. This analysis helps identify the key factors we should focus on when offering indirect support to SMEs through the information supporting infrastructure, which in turn helps us more efficiently manage research related to the SME support policies implemented by government-funded institutions. The results of this study are as follows. First, SMEs that used the information supporting infrastructure showed a significant difference in size compared to domestic R&D SMEs, but no significant difference appeared in the cluster analysis that considered various variables. Based on these findings, we confirmed that SMEs that use the information supporting infrastructure are larger in size and include a relatively higher proportion of companies that transact to a greater degree with large companies, when compared to the general group of SMEs. We also found that the companies already receiving support from the information infrastructure include a high concentration of companies that need collaboration with government-funded institutions.
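The selection-bias check described above could be sketched as follows, with a two-sample size comparison and a cluster analysis on a few firm characteristics; all variables, group labels, and cluster counts are synthetic placeholders, not the study's data.

```python
# Sketch of the selection-bias check: compare firm size between infrastructure
# users and a reference group, then cluster firms on several characteristics.
import numpy as np
from scipy import stats
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n_users, n_ref = 300, 300
size_users = rng.lognormal(mean=3.2, sigma=0.5, size=n_users)  # e.g. employees
size_ref = rng.lognormal(mean=3.0, sigma=0.5, size=n_ref)

# Two-sample test of the size difference between the two groups.
t_stat, p_val = stats.ttest_ind(size_users, size_ref, equal_var=False)
print(f"size difference: t = {t_stat:.2f}, p = {p_val:.3f}")

# Cluster analysis on several firm characteristics (all placeholders).
features = np.column_stack([
    np.concatenate([size_users, size_ref]),
    rng.normal(size=n_users + n_ref),        # R&D intensity (placeholder)
    rng.normal(size=n_users + n_ref),        # sales growth (placeholder)
])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(features))
print("cluster sizes:", np.bincount(labels))
```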
Secondly, among the SMEs that use the information supporting infrastructure, we found that increasing external networking capabilities contributed to enhancing product competitiveness; this was not the effect of direct assistance, but rather an indirect contribution made by increasing open marketing capabilities, in other words, an indirect-only mediating effect. In addition, the number of times a company received additional support in this process through mentoring related to information utilization was found to have a mediated moderation effect on improving external networking capabilities and, in turn, strengthening product competitiveness. The results of this study provide several insights for establishing policies. The findings on KISTI's information support infrastructure may lead to the conclusion that its marketing is already well underway, but they also suggest that it intentionally supports groups that are able to achieve good performance; accordingly, the government should set clear priorities on whether to support underdeveloped companies or to aid those already performing well. Through our research, we have identified how the public information infrastructure contributes to product competitiveness, from which several policy implications can be drawn. First, the public information support infrastructure should have the capability to enhance the ability to interact with, or to find, the experts who provide the required information. Second, if the utilization of the public information support (online) infrastructure is effective, it is not necessary to continuously provide informational mentoring, which is a parallel offline support; rather, offline support such as mentoring should be used as an appropriate device for monitoring abnormal symptoms. Third, SMEs should improve their ability to utilize the infrastructure, because the effect of enhancing networking capacity and, through it, product competitiveness via the public information support infrastructure appears in most types of companies rather than only in specific SMEs.
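Below is a minimal sketch of estimating an indirect (mediation) effect with a bootstrap confidence interval, in the spirit of the indirect-only mediation reported above; the variable names and data are illustrative assumptions, and the moderated-mediation component is omitted for brevity.

```python
# Sketch of an indirect (mediation) effect with a bootstrap confidence interval:
# infrastructure/networking use (X) -> open marketing capability (M) ->
# product competitiveness (Y). Variable names and data are illustrative only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 400
X = rng.normal(size=n)                       # infrastructure / networking use
M = 0.5 * X + rng.normal(size=n)             # open marketing capability
Y = 0.4 * M + 0.1 * X + rng.normal(size=n)   # product competitiveness

def indirect_effect(x, m, y):
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]                        # X -> M
    b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]  # M -> Y | X
    return a * b

boot = []
idx = np.arange(n)
for _ in range(2000):
    s = rng.choice(idx, size=n, replace=True)
    boot.append(indirect_effect(X[s], M[s], Y[s]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(X, M, Y):.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```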
The wall shear stress in the vicinity of end-to-end anastomoses under steady flow conditions was measured using a flush-mounted hot-film anemometer (FMHFA) probe. The experimental measurements were in good agreement with numerical results except in flows with low Reynolds numbers. The wall shear stress increased proximal to the anastomosis in flow from the Penrose tubing (simulating an artery) to the PTFE graft. In flow from the PTFE graft to the Penrose tubing, low wall shear stress was observed distal to the anastomosis. Abnormal distributions of wall shear stress in the vicinity of the anastomosis, resulting from the compliance mismatch between the graft and the host artery, might be an important factor in ANFH formation and graft failure. The present study suggests a correlation between regions of low wall shear stress and the development of anastomotic neointimal fibrous hyperplasia (ANFH) in end-to-end anastomoses.
Air pressure decay (APD) rate and ultrafiltration rate (UFR) tests were performed on new and saline-rinsed dialyzers as well as those reused in patients several times. C-DAK 4000 (Cordis Dow) and CF IS-11 (Baxter Travenol) reused dialyzers obtained from the dialysis clinic were used in the present study. The new dialyzers exhibited a relatively flat APD, whereas saline-rinsed and reused dialyzers showed a considerable amount of decay. C-DAK dialyzers had a larger APD (11.70