• Title/Summary/Keyword: optimal systems

Analysis of weighted usable area and estimation of optimum environmental flow based on growth stages of target species for improving fish habitat in regulated and non-regulated rivers (조절 및 비조절 하천의 어류 서식처 개선을 위한 성장 단계별 가중가용면적 분석 및 최적 환경생태유량 산정)

  • Jung, Sanghwa;Ji, Un;Kim, Kyu-ho;Jang, Eun-kyung
    • Journal of Korea Water Resources Association / v.52 no.spc2 / pp.811-822 / 2019
  • Environmental flows in the downstream reaches of Yongdam Dam, Wonju Stream Dam, and the Hongcheon River were estimated for target fish species selected from among endangered and native species: Nigra for the Yongdam Dam site, Splendidus for the Wonju Stream Dam site, and Signifer for the Hongcheon River site. Physical habitat analysis was performed for the study sites using the Physical Habitat Simulation (PHABSIM) and RIVER2D, which combine hydraulic and habitat models. Based on monitored ecological data, the Habitat Suitability Index (HSI) for each target species was estimated following the approach of the Instream Flow and Aquatic Systems Group (IFASG); in particular, the fish-monitoring results allowed an HSI to be derived for each growth stage of the target species. As a result, the Weighted Usable Area (WUA) downstream of Yongdam Dam was maximized at a discharge of 4.9 m³/s during the spawning period, 5.8 m³/s during the juvenile stage, and 8.9 m³/s during the adult stage. For Wonju Stream Dam, the optimal environmental flows were 0.4 m³/s, 1.0 m³/s, and 1.5 m³/s for the spawning, juvenile, and adult stages, respectively. The habitat analysis for the Hongcheon River site, a non-regulated stream, produced optimum environmental flows of 5 m³/s in the spawning period, 4 m³/s in the juvenile stage, and 6 m³/s in the adult stage.
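The core quantity in this kind of PHABSIM-style analysis, the Weighted Usable Area, can be sketched as follows: each computational cell's wetted area is weighted by a composite suitability derived from the HSI curves, and the discharge that maximizes the total WUA is taken as the optimal environmental flow. The product-form composite, the cell values, and the candidate discharges below are illustrative assumptions, not data from the study.

```python
# Hypothetical sketch of the WUA calculation behind PHABSIM-style habitat
# analysis. All numbers are invented for illustration.

def composite_hsi(depth_si, velocity_si, substrate_si):
    """Product-form composite suitability, one common aggregation rule."""
    return depth_si * velocity_si * substrate_si

def weighted_usable_area(cells):
    """cells: list of (area_m2, depth_si, velocity_si, substrate_si)."""
    return sum(a * composite_hsi(d, v, s) for a, d, v, s in cells)

# Illustrative per-cell suitabilities for three candidate discharges (m^3/s).
habitat_by_discharge = {
    4.0: [(10.0, 0.4, 0.5, 0.9), (12.0, 0.3, 0.6, 0.8)],
    4.9: [(10.0, 0.8, 0.7, 0.9), (12.0, 0.7, 0.8, 0.8)],
    6.0: [(10.0, 0.5, 0.6, 0.9), (12.0, 0.4, 0.5, 0.8)],
}

wua = {q: weighted_usable_area(cells) for q, cells in habitat_by_discharge.items()}
optimal_q = max(wua, key=wua.get)   # discharge maximizing WUA
```

In the study this computation is repeated per growth stage, since each stage has its own HSI curves and therefore its own WUA-maximizing discharge.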

A Study on Differentiation and Improvement in Arbitration Systems in Construction Disputes (건설분쟁 중재제도의 차별화 및 개선방안에 관한 연구)

  • Lee, Sun-Jae
    • Journal of Arbitration Studies / v.29 no.2 / pp.239-282 / 2019
  • The increase in arbitration cases arising from domestic and international construction disputes has highlighted the importance of ADR (Alternative Dispute Resolution), which offers expertise, speed, and neutrality. For the nation's arbitration system and arbitration organizations to join the ranks of advanced international arbitration institutions, the characteristics and advantages of those institutions must be studied through a review of prior domestic and foreign research and of how international arbitration bodies operate. Three problem areas emerge: first, education for the efficient development of arbitrators (compulsory education, continuing education, specialized education, seminars, etc.); second, the effectiveness of arbitration in resolving construction disputes (hearing methods, composition of the tribunal, and speed); and third, the flexibility and diversity of arbitration solutions (the practical problems of methodologies such as mediation-arbitration), which must be addressed in arbitration laws, rules, and guidelines. This study therefore identifies the problems presented in the preceding literature, diagnoses the defects and shortcomings of the KCAB by drawing on the features and benefits of arbitration systems operated by international arbitration institutions, and derives empirical results concerning "arbitrators" through a perception survey. The proposed improvements are as follows. First, an optimal combination of arbitration hearings and judgments in the settlement of construction disputes (to improve speed): (1) improving the composition of the tribunal according to the complexity, specificity, and scale of the case, including responding to the expanded role of non-lawyers (specialists and technical experts) and securing technical arbitrators for each specialty in large and special construction cases; and (2) improving how the arbitration guidelines are drafted by subject area. Second, introducing an intensive hearing system for procedural efficiency, together with institutional improvements: (1) optimizing the hearing procedure and resolution of arbitration cases; and (2) managing the technical arbitrators of tribunals, including expanding the hearing work of technical arbitrators (reviewing the introduction of an assistant system within tribunals) and improving tribunals' use of alternative appraisers (cost analysis and use of specialized institutions for calculating construction costs), with direct management of technical arbitrators to improve the reliability of appraisals and shorten the appraisal period. Third, improving the expert committee system: (1) creating a non-standing technical committee for special technical affairs (supporting pre-qualification of special cases and coordination between the parties); and (2) expanding the standing committee (adding expert technicians for important, special, and large cases, with pre-consultation, pre-coordination, and mediation-arbitration). In addition, institutional differentiation is proposed to enhance the flexibility and diversity of arbitration: first, offering the options of "Med-Arb", "Arb-Med", and "Arb-Med-Arb"; second, revising the dispute-resolution clause of the Act on Contracts to which the State is a Party [Article 28-2 (Agreement on Dispute Resolution)] to expand the methods available for resolving disputes by arbitration; third, strengthening the status, role, and activities of expert technical arbitrators under the Arbitration Industry Promotion Act and its Enforcement Decree, in force since June 28, 2017; and fourth, enacting further legislation on the promotion of the arbitration industry to expand the role of expert technical arbitrators, so that the arbitration system can be established as an international arbitration institution. The study accordingly proposes detailed improvement and differentiation measures together with policy, legal, and institutional improvements and legislation.

Are you a Machine or Human?: The Effects of Human-likeness on Consumer Anthropomorphism Depending on Construal Level (Are you a Machine or Human?: 소셜 로봇의 인간 유사성과 소비자 해석수준이 의인화에 미치는 영향)

  • Lee, Junsik;Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.129-149 / 2021
  • Interest in social robots that can interact socially with humans is growing. Advances in ICT have made it easier for social robots to provide personalized services and emotional connection to individuals, and they are drawing attention as a means of addressing modern social problems and the resulting decline in quality of life. Alongside this interest, the adoption of social robots is spreading rapidly: many companies are bringing robot products to market for various target segments, but no clear trend yet leads the market, so attempts to differentiate robots through design are multiplying. Anthropomorphism in particular has been an important theme in social robot design, and many approaches have tried to anthropomorphize social robots to produce positive effects. However, research that systematically describes the mechanism by which anthropomorphism toward social robots is formed is lacking: most existing studies have focused on verifying its positive effects on consumers, and few have examined how its formation may vary with an individual's motivation or disposition. This vague understanding of anthropomorphism makes it difficult to derive design optima for shaping the anthropomorphism of social robots. The purpose of this study is to verify the mechanism by which the anthropomorphism of social robots is formed. Using a 3×2 mixed-design experiment, the study examined the effect of robots' human-likeness (within-subjects) and consumers' construal level (between-subjects) on the formation of anthropomorphism. Research hypotheses on this mechanism were presented and verified on data from a sample of 206 people. The first hypothesis is that the higher the human-likeness of the robot, the higher the level of anthropomorphism toward it; it was supported by a one-way repeated-measures ANOVA and a post hoc test. The second hypothesis is that the effect of human-likeness on anthropomorphism differs with consumers' construal level. Specifically, the study predicts that (a) the increase in anthropomorphism with human-likeness will be greater under the high-construal condition than under the low-construal condition; (b) if the robot has no human-likeness, anthropomorphism will not differ by construal level; (c) if the robot has low human-likeness, the low-construal condition will anthropomorphize the robot more than the high-construal condition; and (d) if the robot has high human-likeness, the high-construal condition will anthropomorphize it more than the low-construal condition. A two-way repeated-measures ANOVA confirmed that the interaction effect of human-likeness and construal level was significant, and follow-up analyses of the interaction also supported the hypotheses. The analysis shows that a robot's human-likeness increases the level of anthropomorphism of social robots, and that the effect of human-likeness on anthropomorphism varies with the consumer's construal level. This study has implications in that it explains the mechanism by which anthropomorphism is formed by jointly considering human-likeness, a design attribute of social robots, and construal level, an individual's way of thinking. We expect these findings to serve as a basis for design optimization for the formation of anthropomorphism in social robots.
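The predicted interaction in the 3×2 mixed design can be illustrated with hypothetical cell means (the numbers below are invented for illustration and are not the study's data): the gain in anthropomorphism from adding human-likeness is steeper for high-construal consumers, and the group ordering reverses between the low and high human-likeness conditions.

```python
# Synthetic cell means sketching the hypothesized interaction pattern:
# anthropomorphism rating by human-likeness (within: none/low/high) and
# construal level (between: low/high). Invented numbers, not the study's data.

cell_means = {
    # (human_likeness, construal): mean anthropomorphism rating
    ("none", "low"): 2.0, ("none", "high"): 2.0,   # (b): no difference
    ("low",  "low"): 3.4, ("low",  "high"): 3.0,   # (c): low construal higher
    ("high", "low"): 4.6, ("high", "high"): 5.4,   # (d): high construal higher
}

def slope(construal):
    """Gain in anthropomorphism from no to high human-likeness."""
    return cell_means[("high", construal)] - cell_means[("none", construal)]

# The interaction effect is the difference in slopes between the groups:
interaction = slope("high") - slope("low")   # positive under hypothesis (a)
```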

MDP(Markov Decision Process) Model for Prediction of Survivor Behavior based on Topographic Information (지형정보 기반 조난자 행동예측을 위한 마코프 의사결정과정 모형)

  • Jinho Son;Suhwan Kim
    • Journal of Intelligence and Information Systems / v.29 no.2 / pp.101-114 / 2023
  • In wartime, aircraft carrying out deep-strike missions against the enemy are exposed to the risk of being shot down. Military flight personnel who operate high-tech weapon systems are a key combat force in modern warfare, and training them takes a great deal of time, effort, and national budget. This study therefore addressed the path-planning problem of predicting an emergency escape route from enemy territory to a recovery point while avoiding obstacles, thereby increasing the chance of safely recovering downed military flight personnel. Prior work has treated this as a network-based problem, transforming it into TSP, VRP, or Dijkstra shortest-path formulations and approaching it with optimization techniques. Approached purely as a network problem, however, it is difficult to reflect the dynamic factors and uncertainties of the battlefield environment that personnel in distress will face, so an MDP, which is well suited to modeling dynamic environments, was applied instead. In addition, GIS was used to obtain topographic information, and in designing the MDP's reward structure this information was reflected in greater detail so that the model could be more realistic than in previous studies. A value iteration algorithm and a deterministic policy were used to derive a path that lets personnel in distress move the shortest distance while making the most of topographic advantages. Actual terrain and the obstacles personnel may encounter during evasion and escape were also incorporated to add realism, making it possible to predict the route personnel would take in an actual situation. The model presented in this study can be applied to various operational situations by redesigning the reward structure. In actual situations, it will enable decision support based on scientific techniques that reflect the many factors involved in predicting the escape route of downed military flight personnel and conducting combat search and rescue operations.
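A minimal sketch of the value-iteration step described above, on a toy grid with impassable obstacle cells and a recovery point; the grid layout, step cost, and discount factor are illustrative stand-ins for the paper's GIS-derived reward structure, not its actual model.

```python
# Toy escape-route MDP: value iteration on a grid with obstacles ('#') and a
# goal/recovery point ('G'), followed by a deterministic greedy policy.

GRID = [
    "....",
    ".##.",
    ".#G.",
    "....",
]
ACTIONS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}
STEP_COST, GAMMA = -1.0, 0.95

rows, cols = len(GRID), len(GRID[0])
states = [(r, c) for r in range(rows) for c in range(cols) if GRID[r][c] != "#"]
goal = next(s for s in states if GRID[s[0]][s[1]] == "G")

def step(state, action):
    dr, dc = ACTIONS[action]
    r, c = state[0] + dr, state[1] + dc
    if not (0 <= r < rows and 0 <= c < cols) or GRID[r][c] == "#":
        return state, STEP_COST        # blocked moves leave the agent in place
    return (r, c), STEP_COST

V = {s: 0.0 for s in states}
for _ in range(200):                   # value iteration to convergence
    delta = 0.0
    for s in states:
        if s == goal:
            continue                   # goal is absorbing with value 0
        best = max(r + GAMMA * V[nxt] for nxt, r in (step(s, a) for a in ACTIONS))
        delta = max(delta, abs(best - V[s]))
        V[s] = best
    if delta < 1e-9:
        break

def policy(s):
    """Deterministic greedy policy read off the converged value function."""
    return max(ACTIONS, key=lambda a: step(s, a)[1] + GAMMA * V[step(s, a)[0]])
```

The paper's richer reward structure (terrain advantage, obstacle penalties) would replace the uniform `STEP_COST` with state-dependent rewards; the iteration itself is unchanged.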

A Study on Searching for Export Candidate Countries of the Korean Food and Beverage Industry Using Node2vec Graph Embedding and Light GBM Link Prediction (Node2vec 그래프 임베딩과 Light GBM 링크 예측을 활용한 식음료 산업의 수출 후보국가 탐색 연구)

  • Lee, Jae-Seong;Jun, Seung-Pyo;Seo, Jinny
    • Journal of Intelligence and Information Systems / v.27 no.4 / pp.73-95 / 2021
  • This study uses Node2vec graph embedding and LightGBM link prediction to explore untapped export candidate countries for Korea's food and beverage industry. Node2vec improves on the representation of structural equivalence in a network, which is known to be relatively weak in existing link-prediction methods based on counting common neighbors, and is therefore known to perform well at capturing both community structure and structural equivalence. The embedding assigns each node a vector derived from fixed-length walks starting at that node, which makes the node sequences easy to feed as input to downstream models such as logistic regression, support vector machines, and random forests. Building on these properties, this study applied Node2vec to international trade data for the Korean food and beverage industry, aiming to contribute to diversifying Korea's extensive margin within the industry's global value chain. The optimal predictive model derived in this study recorded a precision of 0.95, a recall of 0.79, and an F1 score of 0.86, outperforming the logistic-regression binary classifier set as the baseline, which recorded a precision of 0.95, a recall of 0.73, and an F1 score of 0.83. The LightGBM-based model also outperformed the link-prediction model from a previous study used here as a benchmark: the benchmark recorded a recall of only 0.75, whereas the proposed model reached 0.79. This performance difference stems from the model-training strategy. Trades were grouped by trade-value scale, and prediction models were trained differently per group: (1) randomly masking trades and training the model over all trades, with no condition on trade value; (2) randomly masking some of the trades whose value was at or above the average and training the model; and (3) randomly masking some of the trades in the top 25% by value and training the model. Experiments confirmed that the model trained by masking trades with above-average value performed best and most stably. Additional investigation found that most of the potential export candidates for Korea derived from this model appeared appropriate. Taken together, the study demonstrates the practical utility of link prediction with Node2vec and LightGBM, and yields useful implications for masking strategies that improve link prediction during training. The study also has policy utility because it applies graph-embedding-based link prediction to trade transactions, a setting in which little such research has been done. Its results support rapid responses to changes in the global value chain, such as the recent US-China trade conflict or Japan's export regulations, and the approach has sufficient usefulness as a tool for policy decision-making.
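The biased second-order random walk at the core of Node2vec can be sketched in a few lines: the return parameter p and in-out parameter q trade off BFS-like walks (capturing structural equivalence) against DFS-like walks (capturing communities), and the resulting node sequences are what a skip-gram-style embedding model would consume downstream. The toy trade graph and parameter values are illustrative, not the study's data.

```python
# Pure-Python sketch of node2vec's biased second-order random walk.
import random

def node2vec_walk(adj, start, length, p=1.0, q=1.0, rng=random):
    """adj: dict node -> list of neighbours (unweighted, undirected)."""
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        if not adj[cur]:
            break
        if len(walk) == 1:                   # first step: uniform choice
            walk.append(rng.choice(adj[cur]))
            continue
        prev = walk[-2]
        weights = []
        for nxt in adj[cur]:
            if nxt == prev:                  # return to the previous node
                weights.append(1.0 / p)
            elif nxt in adj[prev]:           # stays near prev (BFS-like)
                weights.append(1.0)
            else:                            # moves outward (DFS-like)
                weights.append(1.0 / q)
        walk.append(rng.choices(adj[cur], weights=weights, k=1)[0])
    return walk

# Tiny illustrative trade graph (country codes as nodes).
graph = {
    "KOR": ["USA", "JPN", "VNM"],
    "USA": ["KOR", "JPN"],
    "JPN": ["KOR", "USA"],
    "VNM": ["KOR"],
}
rng = random.Random(42)
walks = [node2vec_walk(graph, n, length=5, p=0.5, q=2.0, rng=rng)
         for n in graph for _ in range(10)]
```

In the full pipeline these walks train an embedding, node-pair vectors are labeled by whether an edge exists (with the masking strategies described above), and LightGBM classifies candidate links.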

A Study on Web-based Technology Valuation System (웹기반 지능형 기술가치평가 시스템에 관한 연구)

  • Sung, Tae-Eung;Jun, Seung-Pyo;Kim, Sang-Gook;Park, Hyun-Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.23-46 / 2017
  • Although cases of evaluating the value of specific companies or projects, centered on developed countries in North America and Europe, have existed since the early 2000s, systems and methodologies for estimating the economic value of individual technologies or patents have only gradually come into use. Several online systems do qualitatively evaluate a technology's grade or patent rating, such as KIBO's 'KTRS' and the Korea Invention Promotion Association's 'SMART 3.1'. Recently, however, a web-based technology valuation system called the 'STAR-Value system', which calculates quantitative values of a subject technology for purposes such as business feasibility analysis, investment attraction, and tax or litigation matters, has been officially opened and is spreading. In this study, we introduce the methodologies and evaluation models, the reference information supporting them, and how the associated databases are utilized, focusing on the modules and frameworks embedded in the STAR-Value system. In particular, the system offers six valuation methods, including the discounted cash flow (DCF) method, a representative income-approach model that discounts anticipated future economic income to present value, and the relief-from-royalty method, which calculates the present value of royalties, treating the royalty rate as the subject technology's contribution to the business value created. We examine how these models and their supporting information (technology life, corporate financial information, discount rate, industrial technology factors, etc.) are used and linked in an intelligent manner. Based on classifications of the technology being evaluated such as the International Patent Classification (IPC) or the Korea Standard Industry Classification (KSIC), the STAR-Value system automatically returns metadata such as the technology cycle time (TCT), the sales growth rate and profitability of similar companies or industry sectors, the weighted average cost of capital (WACC), and indices of industrial technology factors, and applies adjustment factors to them so that the calculated technology value is highly reliable and objective. Furthermore, if data-driven information on the potential market size of the target technology and the market share of the commercializing entity is referenced, or if estimated value ranges of similar technologies by industry sector are provided from completed evaluation cases accumulated in the database, the STAR-Value system is expected to present highly accurate value ranges in real time by intelligently linking its support modules. Beyond the valuation models and primary variables explained in this paper, the system aims to operate more systematically and in a data-driven way through modules for optimal model-selection guidance, intelligent reasoning over technology value ranges, and market-share prediction based on similar-company selection. The research on developing this web-based system and making it intelligent is significant in that it widely disseminates a web-based system that can validate and apply the theory of technology valuation in practice, and it is expected to be utilized in various fields of technology commercialization.
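The two income-approach models named above can be sketched directly; the cash flows, revenues, royalty rate, tax rate, and discount rate below are illustrative assumptions, not STAR-Value defaults.

```python
# Hedged sketch of DCF and relief-from-royalty valuation over a technology life.

def dcf_value(cash_flows, discount_rate):
    """Present value of projected free cash flows over the technology life."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

def relief_from_royalty(revenues, royalty_rate, tax_rate, discount_rate):
    """PV of the after-tax royalties the owner is 'relieved' from paying."""
    royalties = [rev * royalty_rate * (1 - tax_rate) for rev in revenues]
    return dcf_value(royalties, discount_rate)

# Five-year technology life, WACC-style discount rate of 12% (illustrative).
cash_flows = [100.0, 120.0, 140.0, 150.0, 150.0]
revenues = [1000.0, 1200.0, 1400.0, 1500.0, 1500.0]

tech_value_dcf = dcf_value(cash_flows, 0.12)
tech_value_rfr = relief_from_royalty(revenues, royalty_rate=0.03,
                                     tax_rate=0.22, discount_rate=0.12)
```

In the system described above, inputs like the discount rate (WACC), technology life (TCT), and growth rates would be pulled from the IPC/KSIC-matched metadata rather than entered by hand.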

An Intelligence Support System Research on KTX Rolling Stock Failure Using Case-based Reasoning and Text Mining (사례기반추론과 텍스트마이닝 기법을 활용한 KTX 차량고장 지능형 조치지원시스템 연구)

  • Lee, Hyung Il;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.47-73 / 2020
  • KTX rolling stock is a system consisting of numerous machines, electrical devices, and components, and its maintenance requires considerable expertise and experience. When a failure occurs, the maintainer's knowledge and experience make a difference in the time taken and the quality of the work needed to solve the problem, and hence in the resulting availability of the vehicle. Although problem solving generally follows fault manuals, experienced professionals can diagnose failures quickly and act by applying personal know-how. Because such knowledge is tacit, it is difficult to pass on completely to successors, and previous studies have developed case-based rolling stock expert systems to turn it into data-driven knowledge. Nonetheless, research on the KTX rolling stock most commonly used on main lines, and on systems that extract textual meaning to search for similar cases, is still lacking. This study therefore proposes an intelligent support system that provides an action guide for emerging failures, using the know-how of rolling stock maintenance experts as problem-solving examples. A case base was constructed from rolling stock failure data collected from 2015 to 2017, and an integrated dictionary covering essential terminology and failure codes was built from the case base to account for the specialized vocabulary of the railway rolling stock sector. Given a new failure, similar past cases are retrieved from the deployed case base, and the three most similar failure cases are extracted so that their actual remedial actions can be proposed as a diagnostic guide. To overcome the limitations of keyword-matching case retrieval in prior case-based expert system studies, various dimensionality-reduction techniques were applied so that similarity calculations could take the semantic relationships among failure descriptions into account, and their usefulness was verified through experiments. Three algorithms, Non-negative Matrix Factorization (NMF), Latent Semantic Analysis (LSA), and Doc2Vec, were applied to extract features of each failure, and similar cases were retrieved by measuring the cosine distance between the resulting vectors. Precision, recall, and the F-measure were used to assess the quality of the proposed actions. An analysis of variance comparing five algorithms, including one that randomly retrieves failure cases with identical failure codes and one that applies cosine similarity directly to word vectors, confirmed that the performance differences were statistically significant. Differences in performance by the number of reduced dimensions were also verified in order to derive settings suitable for practical application. The analysis showed that direct cosine similarity on word vectors outperformed NMF and LSA, while the Doc2Vec-based algorithm performed best of all; among the dimensionality-reduction techniques, performance improved as the number of dimensions grew, up to an appropriate level. Through this study, we confirmed effective methods for extracting features from, and converting, unstructured data when applying case-based reasoning in the specialized domain of KTX rolling stock, where most attributes are text. Text mining is being studied for use in many areas, but studies using text data remain scarce in environments with many specialized terms and limited data access, such as the one addressed here. In this regard, it is significant that this study first presents an intelligent diagnostic system that complements keyword-based case search by applying text-mining techniques to extract failure characteristics, and it is expected to provide implications as a basic study for developing diagnostic systems that can be used immediately on site.
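The retrieval step can be sketched with a stdlib-only, word-level cosine baseline; in the study's stronger variants, NMF, LSA, or Doc2Vec vectors would replace the raw term-frequency vectors. The failure texts below are invented examples, not cases from the system.

```python
# Top-3 case retrieval by cosine similarity over term-frequency vectors,
# mirroring the word-based cosine baseline in the case-based reasoning system.
import math
from collections import Counter

def cosine(u: Counter, v: Counter) -> float:
    dot = sum(u[t] * v[t] for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def top_k_cases(query: str, case_base: dict, k: int = 3):
    qv = Counter(query.lower().split())
    scored = [(cosine(qv, Counter(text.lower().split())), cid)
              for cid, text in case_base.items()]
    return [cid for score, cid in sorted(scored, reverse=True)[:k]]

# Invented failure descriptions standing in for the 2015-2017 case base.
case_base = {
    "C001": "traction motor overheat alarm during acceleration",
    "C002": "brake pressure low alarm in trailing bogie",
    "C003": "traction converter fault motor overheat",
    "C004": "pantograph arcing under high speed",
}
matches = top_k_cases("motor overheat alarm", case_base)
```

A dimensionality-reduced variant would map each description to a dense vector first (e.g. via NMF, LSA, or Doc2Vec) and compute cosine similarity on those vectors instead.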

A Contemplation on Measures to Advance Logistics Centers (물류센터 선진화를 위한 발전 방안에 대한 소고)

  • Sun, Il-Suck;Lee, Won-Dong
    • Journal of Distribution Science / v.9 no.1 / pp.17-27 / 2011
  • As the world becomes more globalized, business competition becomes fiercer, while consumers' demand for less expensive, quality products increases. Businesses strive to secure a competitive edge in costs and services, and the logistics industry, that is, the industry that stores and transports goods, once thought of as an expense, has begun to be considered a third cash cow and a source of new income. Logistics centers are central to the storage, loading and unloading of deliveries, packaging operations, and the dispensing of goods information. As hubs for various deliveries, they also serve as core infrastructure for smoothly coordinating manufacturing and sales, using varied information and operation systems. Logistics centers are increasingly becoming centers of business supply activities, growing beyond their previous role of primarily storing goods. They are no longer just facilities; by realizing SCM they have become logistics strongholds encompassing everything from demand forecasting to the regulation of supply, manufacturing, and sales, taking into account marketability and the operation of services and products. Despite these changes in logistics operations, however, some centers have been unable to shed their past role as warehouses. For the continuous development of logistics centers, various measures are needed, including a revision of current supporting policies, the formulation of effective management plans, and the establishment of systematic standards for founding, managing, and controlling logistics centers. To this end, this research explored previous studies on the use and effectiveness of logistics centers. From a theoretical perspective, an evaluation of the overall introduction, purposes, and transitions in the use of logistics centers identified issues to consider and suggested measures to promote and further advance logistics centers.
First, a fact-finding survey to support demand forecasting and standardization is needed. Logistics newspapers predicted that after 2012 supply would exceed demand, causing rents to fall, and the business environment for logistics centers has faltered. However, since fact-finding surveys of actual demand for domestic logistics centers are scarce, it is hard to predict what the future holds for the industry. Accordingly, the first priority should be to grasp the current market situation through accurate domestic and international fact-finding surveys. Based on those, management and evaluation indicators should be developed to build the foundation for the consistent advancement of logistics centers. Second, many policies for logistics centers should be revised or developed. Above all, a guideline for fair trade between shippers and commercial logistics centers should be enacted. Because no standards for fair trade between them exist, unfair trades rampant under market practice have disordered the market, and the logistics industry now confronts difficulties of its own. Unfair trade cases that currently plague logistics centers should therefore be gathered by the industry, and fair trade guidelines should be established and implemented. In addition, restrictive employment regulations for foreign workers should be eased, and logistics centers should be charged industrial rates for electricity. Third, various measures should be taken to improve the management environment. Above all, ways must be found to activate value-added logistics. Because the traditional purpose of logistics centers was storage and the loading and unloading of goods, their profitability was limited, and the need arose to create value-added services. Logistics centers have been perceived as support for a company's storage, manufacturing, and sales needs, not as creators of profit, and their role in a company's economics has been to lower costs.
As the logistics management environment has tightened, developing profit creation as a new function alongside storage is a desirable goal, and to achieve it, value-added logistics should be promoted. Logistics centers can also be improved through cost estimation. They have made strides in facility development but have fallen behind in others, particularly in management. Lax management has been rampant because the industry has not developed a concept of cost estimation. The centers have made efforts toward unification, standardization, and informatization while realizing cost reductions by establishing systems for effective management, but it has been hard to produce profits. There is thus an urgent need to estimate costs by determining a basic cost range for each division of work at logistics centers; this undertaking can be the first step toward improving the ineffective aspects of their operation. Ongoing research and constant effort have raised effectiveness in the manufacturing industry, but studies on resource management in logistics centers are scarce. A plan to calculate the optimal level of resources needed to operate a logistics center should therefore be developed and embedded in management practice, for example by standardizing hours of operation. If logistics centers, shippers, related trade groups, academics, and other experts launched a committee to work with the government and maintained an ongoing relationship, the coordination and cooperation among members would help produce coherent development plans for logistics centers. If the government continues its efforts to provide financial support, nurture professional workers, and maintain safety management, the continuous advancement of logistics centers can be anticipated.


Application of Support Vector Regression for Improving the Performance of the Emotion Prediction Model (감정예측모형의 성과개선을 위한 Support Vector Regression 응용)

  • Kim, Seongjin;Ryoo, Eunchung;Jung, Min Kyu;Kim, Jae Kyeong;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.3
    • /
    • pp.185-202
    • /
    • 2012
  • Since the value of information has been recognized in the information society, the use and collection of information have become important. Like an artistic painting, a facial expression contains a wealth of information and can be described in a thousand words. Following this idea, there have recently been a number of attempts to provide customers and companies with intelligent services that perceive human emotions through facial expressions. For example, MIT Media Lab, the leading organization in this research area, has developed a human emotion prediction model and applied its studies to commercial business. In academia, conventional methods such as Multiple Regression Analysis (MRA) and Artificial Neural Networks (ANN) have been applied to predict human emotion in prior studies. However, MRA is generally criticized for its low prediction accuracy. This is inevitable since MRA can only capture a linear relationship between the independent variables and the dependent variable. To mitigate the limitations of MRA, some studies, such as Jung and Kim (2012), have used ANN as an alternative and reported that ANN generated more accurate predictions than statistical methods like MRA. However, ANN has also been criticized for overfitting and the difficulty of network design (e.g., setting the number of layers and the number of nodes in the hidden layers). Against this background, we propose a novel model using Support Vector Regression (SVR) in order to increase prediction accuracy. SVR is an extended version of the Support Vector Machine (SVM) designed to solve regression problems. The model produced by SVR depends on only a subset of the training data, because the cost function for building the model ignores any training data that is close (within a threshold ${\varepsilon}$) to the model prediction. 
Using SVR, we tried to build a model that can measure the levels of arousal and valence from facial features. To validate the usefulness of the proposed model, we collected data on facial reactions to appropriate visually stimulating content and extracted features from the data. Next, preprocessing steps were taken to choose statistically significant variables. In total, 297 cases were used for the experiment. As comparative models, we also applied MRA and ANN to the same data set. For SVR, we adopted the '${\varepsilon}$-insensitive loss function' and the 'grid search' technique to find the optimal values of parameters such as C, d, ${\sigma}^2$, and ${\varepsilon}$. In the case of ANN, we adopted a standard three-layer backpropagation network with a single hidden layer. The learning rate and momentum rate of the ANN were set to 10%, and we used the sigmoid function as the transfer function of the hidden and output nodes. We performed the experiments repeatedly, varying the number of nodes in the hidden layer over n/2, n, 3n/2, and 2n, where n is the number of input variables. The stopping condition for the ANN was set to 50,000 learning events, and we used MAE (Mean Absolute Error) as the measure for performance comparison. From the experiment, we found that SVR achieved the highest prediction accuracy on the hold-out data set compared to MRA and ANN. Regardless of the target variable (the level of arousal or the level of positive/negative valence), SVR showed the best performance on the hold-out data set. ANN also outperformed MRA, but it showed considerably lower prediction accuracy than SVR for both target variables. The findings of our research are expected to be useful to researchers and practitioners who wish to build models for recognizing human emotions.
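The experimental setup described in the abstract — an RBF-kernel SVR with ${\varepsilon}$-insensitive loss, a grid search over C, ${\sigma}^2$ (here via scikit-learn's `gamma`), and ${\varepsilon}$, evaluated by MAE on a hold-out set — can be sketched roughly as follows. This is a minimal illustration, not the authors' code: the facial-feature data is not available, so a synthetic regression target stands in for the arousal/valence scores, and the grid values are placeholders.

```python
# Hypothetical sketch of the SVR pipeline described in the abstract.
# Synthetic data stands in for the paper's 297 facial-feature cases.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(297, 5))                      # 297 cases, as in the study
y = np.tanh(X[:, 0] - 0.5 * X[:, 1]) + 0.1 * rng.normal(size=297)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Grid over the hyperparameters named in the abstract (values illustrative);
# for an RBF kernel, gamma plays the role of 1 / (2 * sigma^2).
param_grid = {
    "C": [0.1, 1, 10],
    "gamma": [0.01, 0.1, 1.0],
    "epsilon": [0.01, 0.1, 0.5],   # width of the epsilon-insensitive tube
}
search = GridSearchCV(SVR(kernel="rbf"), param_grid,
                      scoring="neg_mean_absolute_error", cv=5)
search.fit(X_tr, y_tr)

# MAE on the hold-out set, the comparison measure used in the study
mae = mean_absolute_error(y_te, search.predict(X_te))
print(f"hold-out MAE: {mae:.3f}")
```

Training points predicted within `epsilon` of their target contribute no loss, which is why only a subset of the data (the support vectors) determines the fitted model.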

Sentiment Analysis of Movie Review Using Integrated CNN-LSTM Model (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.141-154
    • /
    • 2019
  • Internet technology and social media are growing rapidly, and data mining technology has evolved to enable unstructured document representation in a variety of applications. Sentiment analysis is an important technology that can distinguish low-quality from high-quality content through the text data about products, and it has proliferated within text mining. Sentiment analysis mainly analyzes people's opinions in text data by assigning them to predefined categories, positive and negative. It has been studied in various directions in terms of accuracy, from simple rule-based approaches to dictionary-based approaches using predefined labels. In fact, sentiment analysis is one of the most active research areas in natural language processing and is widely studied in text mining. Real online reviews are not only easy to collect openly, they also affect business: in marketing, real-world information from customers is gathered from websites rather than surveys, and whether a website's posts are positive or negative is reflected in sales, so companies try to identify this information. However, many reviews on a website are not well written and are difficult to interpret. Earlier studies in this research area used review data from the Amazon.com shopping mall, but recent studies have used data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, etc. However, a lack of accuracy is recognized because sentiment calculations change according to the subject, paragraph, direction of the sentiment lexicon, and sentence strength. This study aims to classify polarity in sentiment analysis into positive and negative categories and to increase the prediction accuracy of polarity analysis using the pretrained IMDB review data set. 
First, as comparative models, the text classification adopts popular machine learning algorithms such as NB (naive Bayes), SVM (support vector machines), XGBoost, RF (random forests), and Gradient Boosting. Second, deep learning has demonstrated the ability to extract complex discriminative features from data. Representative algorithms are CNN (convolutional neural networks), RNN (recurrent neural networks), and LSTM (long short-term memory). CNN can be used similarly to BoW when processing a sentence in vector format, but it does not consider the sequential nature of the data. RNN handles order well because it takes the time information of the data into account, but it suffers from the long-term dependency problem in memory; LSTM is used to solve this problem. For comparison, CNN and LSTM were chosen as simple deep learning models. In addition to the classical machine learning algorithms, CNN, LSTM, and the integrated models were analyzed. Although the algorithms have many parameters, we examined the relationship between parameter values and precision to find the optimal combination, and we tried to understand how well these models work for sentiment analysis and why. This study proposes an integrated CNN-LSTM algorithm to extract the positive and negative features in text analysis. The reasons for combining these two algorithms are as follows. CNN can automatically extract features for classification by applying convolution layers and massively parallel processing. LSTM is not capable of highly parallel processing; like faucets, it has input, output, and forget gates that can be opened and closed at the desired time. These gates have the advantage of placing memory blocks on hidden nodes. The memory block of the LSTM may not store all the data, but it can solve the RNN's long-term dependency problem. 
Furthermore, when LSTM is applied after CNN's pooling layer, the model has an end-to-end structure, so spatial and temporal features can be learned simultaneously. The integrated CNN-LSTM achieved 90.33% accuracy; it trains more slowly than CNN alone but faster than LSTM alone. The proposed model was more accurate than the other models. In addition, each word embedding layer can be improved by training the kernel step by step. CNN-LSTM can compensate for the weaknesses of each model, and the end-to-end structure of the LSTM offers the advantage of layer-wise learning. For these reasons, this study seeks to enhance the classification accuracy of movie reviews using the integrated CNN-LSTM model.
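The gate mechanism the abstract describes — input, output, and forget gates controlling a memory block — can be sketched as a single LSTM cell step in plain numpy. This is a minimal illustration of the standard LSTM equations, not the paper's model: the weights here are random placeholders, and in the integrated CNN-LSTM the input `x` at each step would be a feature vector produced by the CNN's pooling layer.

```python
# Minimal numpy sketch of one LSTM cell step (standard LSTM equations).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One step: the gates decide what to write, keep, and expose."""
    i = sigmoid(x @ W["i"] + h_prev @ U["i"] + b["i"])   # input gate
    f = sigmoid(x @ W["f"] + h_prev @ U["f"] + b["f"])   # forget gate
    o = sigmoid(x @ W["o"] + h_prev @ U["o"] + b["o"])   # output gate
    g = np.tanh(x @ W["g"] + h_prev @ U["g"] + b["g"])   # candidate memory
    c = f * c_prev + i * g        # memory block: keep old content + write new
    h = o * np.tanh(c)            # exposed hidden state
    return h, c

rng = np.random.default_rng(0)
d_in, d_h = 8, 4                  # e.g. CNN feature size, hidden size (illustrative)
W = {k: rng.normal(scale=0.1, size=(d_in, d_h)) for k in "ifog"}
U = {k: rng.normal(scale=0.1, size=(d_h, d_h)) for k in "ifog"}
b = {k: np.zeros(d_h) for k in "ifog"}

h = np.zeros(d_h)
c = np.zeros(d_h)
for t in range(5):                # run over a short feature sequence
    x = rng.normal(size=d_in)
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)
```

Because the forget gate `f` multiplicatively carries `c_prev` forward, gradients can flow across many steps without vanishing, which is the mechanism that relieves the long-term dependency problem of a plain RNN.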