• Title/Summary/Keyword: neural network.


Sentiment Analysis of Korean Reviews Using CNN: Focusing on Morpheme Embedding (CNN을 적용한 한국어 상품평 감성분석: 형태소 임베딩을 중심으로)

  • Park, Hyun-jung;Song, Min-chae;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.59-83
    • /
    • 2018
  • With the increasing importance of sentiment analysis for grasping the needs of customers and the public, various types of deep learning models have been actively applied to English texts. In deep learning-based sentiment analysis of English texts, the natural language sentences in the training and test datasets are usually converted into sequences of word vectors before being entered into the deep learning models. In this case, word vectors generally refer to vector representations of words obtained by splitting a sentence on space characters. There are several ways to derive word vectors, one of which is Word2Vec, used to produce the 300-dimensional Google word vectors from about 100 billion words of Google News data. These have been widely used in studies of sentiment analysis of reviews from various fields such as restaurants, movies, laptops, and cameras. Unlike in English, morphemes play an essential role in sentiment analysis and sentence structure analysis in Korean, which is a typical agglutinative language with highly developed postpositions and endings. A morpheme can be defined as the smallest meaningful unit of a language, and a word consists of one or more morphemes. For example, for the word '예쁘고', the morphemes are '예쁘' (adjective) and '고' (connective ending). Reflecting the significance of Korean morphemes, it seems reasonable to adopt the morpheme as the basic unit in Korean sentiment analysis. Therefore, in this study, we use 'morpheme vectors' as the input to a deep learning model rather than the 'word vectors' mainly used for English text. A morpheme vector is a vector representation of a morpheme and can be derived by applying an existing word vector derivation mechanism to sentences divided into their constituent morphemes. However, several questions arise. What is the desirable range of POS (part-of-speech) tags when deriving morpheme vectors for improving the classification accuracy of a deep learning model? Is it appropriate to apply a typical word vector model, which primarily relies on the form of words, to Korean with its high ratio of homonyms? Will text preprocessing such as correcting spelling or spacing errors affect classification accuracy, especially when deriving morpheme vectors from Korean product reviews containing many grammatical mistakes and variations? We seek empirical answers to these fundamental issues, which may be encountered first when applying various deep learning models to Korean texts. As a starting point, we summarize these issues as three central research questions. First, which is more effective as the initial input of a deep learning model: morpheme vectors from grammatically correct texts of a domain other than the analysis target, or morpheme vectors from considerably ungrammatical texts of the same domain? Second, what is an appropriate morpheme vector derivation method for Korean with regard to the range of POS tags, homonyms, text preprocessing, and minimum frequency? Third, can we achieve a satisfactory level of classification accuracy when applying deep learning to Korean sentiment analysis? As an approach to these research questions, we generate various types of morpheme vectors reflecting the research questions and then compare the classification accuracy using a non-static CNN (Convolutional Neural Network) model that takes the morpheme vectors as input. As training and test datasets, 17,260 cosmetics product reviews from Naver Shopping are used.
To derive the morpheme vectors, we use data from both the same domain as the target and another domain: about 2 million cosmetics product reviews from Naver Shopping and 520,000 Naver News articles, arguably corresponding to Google's News data. The six primary sets of morpheme vectors constructed in this study differ in terms of the following three criteria. First, they come from two types of data sources: Naver News, with high grammatical correctness, and Naver Shopping cosmetics product reviews, with low grammatical correctness. Second, they are distinguished by the degree of data preprocessing, namely, only splitting sentences or additionally correcting spelling and spacing after sentence separation. Third, they vary in the form of input fed into the word vector model: either the morphemes themselves or the morphemes with their POS tags attached. The morpheme vectors further vary depending on the range of POS tags considered, the minimum frequency of morphemes included, and the random initialization range. All morpheme vectors are derived through the CBOW (Continuous Bag-Of-Words) model with a context window of 5 and a vector dimension of 300. The results suggest that using text from the same domain even with a lower degree of grammatical correctness, performing spelling and spacing corrections as well as sentence splitting, and incorporating morphemes of all POS tags including the incomprehensible category lead to better classification accuracy. POS tag attachment, which is devised for the high proportion of homonyms in Korean, and the minimum frequency threshold for a morpheme to be included do not appear to have any definite influence on classification accuracy.
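The abstract's embedding setup (morpheme tokenization, optional POS-tag attachment, CBOW with a context window of 5 and 300-dimensional vectors) can be sketched as follows. This is a minimal illustration, not the authors' code: the KoNLPy Okt analyzer, gensim, and the two sample reviews are assumptions, since the paper does not name its tools.

```python
# Minimal sketch of deriving Korean morpheme vectors via CBOW (window 5, 300 dims).
# Assumptions: KoNLPy's Okt as the morpheme analyzer and gensim's Word2Vec as the
# CBOW implementation; the paper does not specify which tools were used.
from konlpy.tag import Okt          # morpheme analyzer (assumed choice)
from gensim.models import Word2Vec  # CBOW / skip-gram implementation (assumed choice)

okt = Okt()

def to_morphemes(sentence, attach_pos=False):
    """Split a sentence into morphemes, optionally appending POS tags
    (one of the vector-set variants compared in the paper)."""
    return [f"{m}/{t}" if attach_pos else m for m, t in okt.pos(sentence)]

reviews = ["배송이 빠르고 제품이 예쁘고 좋아요", "가격이 비싸서 별로예요"]  # placeholder reviews
corpus = [to_morphemes(r) for r in reviews]

# CBOW (sg=0), context window 5, 300-dimensional vectors, minimum frequency 1.
model = Word2Vec(corpus, vector_size=300, window=5, min_count=1, sg=0)
print(model.wv.most_similar(corpus[0][0], topn=3))
```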

A Study on the Development Trend of Artificial Intelligence Using Text Mining Technique: Focused on Open Source Software Projects on Github (텍스트 마이닝 기법을 활용한 인공지능 기술개발 동향 분석 연구: 깃허브 상의 오픈 소스 소프트웨어 프로젝트를 대상으로)

  • Chong, JiSeon;Kim, Dongsung;Lee, Hong Joo;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.1-19
    • /
    • 2019
  • Artificial intelligence (AI) is one of the main driving forces of the Fourth Industrial Revolution. The technologies associated with AI have already shown abilities equal to or better than those of people in many fields, including image and speech recognition. In particular, much effort has been actively devoted to identifying current technology trends and analyzing AI's development directions, because AI technologies can be utilized in a wide range of fields, including the medical, financial, manufacturing, service, and education fields. Major platforms for developing complex AI algorithms for learning, reasoning, and recognition have been opened to the public as open source projects, and as a result, technologies and services that utilize them have increased rapidly. This has been confirmed as one of the major reasons for the fast development of AI technologies. Additionally, the spread of the technology is greatly indebted to open source software, developed by major global companies, that supports natural language recognition, speech recognition, and image recognition. Therefore, this study aimed to identify the practical trend of AI technology development by analyzing OSS projects associated with AI, which have been developed through the online collaboration of many parties. This study searched and collected a list of major projects related to AI generated from 2000 to July 2018 on Github, and confirmed the development trends of major technologies in detail by applying text mining techniques to the topic information that indicates the characteristics and technical fields of the collected projects. The results of the analysis showed that the number of software development projects per year was less than 100 until 2013. However, it increased to 229 projects in 2014 and 597 projects in 2015. In particular, the number of open source projects related to AI increased rapidly in 2016 (2,559 OSS projects). The number of projects initiated in 2017 was 14,213, almost four times the total number of projects generated from 2009 to 2016 (3,555 projects), and the number of projects initiated from January to July 2018 was 8,737. The development trend of AI-related technologies was evaluated by dividing the study period into three phases, with the appearance frequency of topics indicating the technology trends of AI-related OSS projects. The results showed that natural language processing remained at the top in all years, implying that related OSS has been developed continuously. Until 2015, the programming languages Python, C++, and Java were listed among the top ten most frequent topics. However, after 2016, programming languages other than Python disappeared from the top ten topics; instead, platforms supporting the development of AI algorithms, such as TensorFlow and Keras, show high appearance frequency. Additionally, reinforcement learning algorithms and convolutional neural networks, which have been used in various fields, appeared frequently as topics. The results of the topic network analysis showed that the topics with the highest degree centrality were similar to those with the highest appearance frequency. The main difference was that visualization and medical imaging topics were found at the top of the list, although they had not been at the top from 2009 to 2012, indicating that OSS was developed in the medical field in order to utilize AI technology.
Moreover, although computer vision was in the top 10 of the appearance frequency list from 2013 to 2015, it was not in the top 10 for degree centrality. The topics at the top of the degree centrality list were similar to those at the top of the appearance frequency list; only the ranks of convolutional neural networks and reinforcement learning changed slightly. The trend of technology development was examined using the appearance frequency of topics and degree centrality. The results showed that machine learning had the highest frequency and the highest degree centrality in all years. Moreover, it is noteworthy that, although the deep learning topic showed a low frequency and a low degree centrality between 2009 and 2012, its ranks abruptly increased between 2013 and 2015, and in recent years both technologies have had high appearance frequency and degree centrality. TensorFlow first appeared during the 2013-2015 phase, and its appearance frequency and degree centrality soared between 2016 and 2018, placing it at the top of the lists after deep learning and Python. Computer vision and reinforcement learning did not show an abrupt increase or decrease and had relatively low appearance frequency and degree centrality compared with the above-mentioned topics. Based on these analysis results, it is possible to identify the fields in which AI technologies are actively developed. The results of this study can be used as a baseline dataset for more empirical analysis of future technology trends and technology convergence.
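The two measures the study relies on, topic appearance frequency and degree centrality in a topic co-occurrence network, can be computed roughly as below. This is an illustrative sketch with made-up topic lists, not the authors' pipeline; networkx is an assumed tool choice.

```python
# Sketch: topic appearance frequency and degree centrality of a topic co-occurrence
# network built from (hypothetical) GitHub project topic lists.
from collections import Counter
from itertools import combinations
import networkx as nx

# Hypothetical topic lists, one per project.
project_topics = [
    ["machine-learning", "deep-learning", "tensorflow"],
    ["machine-learning", "computer-vision", "python"],
    ["deep-learning", "reinforcement-learning", "python"],
]

# Appearance frequency of each topic across projects.
frequency = Counter(t for topics in project_topics for t in topics)

# Co-occurrence network: topics are nodes; topics sharing a project are linked.
graph = nx.Graph()
for topics in project_topics:
    graph.add_edges_from(combinations(sorted(set(topics)), 2))

centrality = nx.degree_centrality(graph)
print(frequency.most_common(5))
print(sorted(centrality.items(), key=lambda kv: kv[1], reverse=True)[:5])
```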

Target-Aspect-Sentiment Joint Detection with CNN Auxiliary Loss for Aspect-Based Sentiment Analysis (CNN 보조 손실을 이용한 차원 기반 감성 분석)

  • Jeon, Min Jin;Hwang, Ji Won;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.4
    • /
    • pp.1-22
    • /
    • 2021
  • Aspect-Based Sentiment Analysis (ABSA), which analyzes sentiment based on the aspects that appear in a text, is drawing attention because it can be used in various business industries. ABSA analyzes sentiment for each of the multiple aspects that a text contains, and it is being studied in various forms depending on the purpose, such as analyzing all targets or only aspects and sentiments. Here, an aspect refers to a property of a target, and a target refers to the text that causes the sentiment. For example, for restaurant reviews, the aspects could be food taste, food price, quality of service, mood of the restaurant, and so on. Also, if there is a review that says, "The pasta was delicious, but the salad was not," the words "pasta" and "salad," which are directly mentioned in the sentence, become the "targets." So far, most ABSA studies have analyzed sentiment based only on aspects or targets. However, even with the same aspects or targets, sentiment analysis may be inaccurate, for instance when aspects or sentiments are divided or when sentiment exists without a target. Consider a sentence like "Pizza and the salad were good, but the steak was disappointing": although the aspect of this sentence is limited to "food," conflicting sentiments coexist. In addition, in a sentence such as "Shrimp was delicious, but the price was extravagant," although the target is "shrimp," opposite sentiments coexist depending on the aspect. Finally, in a sentence like "The food arrived too late and is cold now," there is no target (NULL), but it conveys a negative sentiment toward the aspect "service." Thus, failing to consider both aspects and targets, when sentiment or aspect is divided or when sentiment exists without a target, creates a dual dependency problem. To address this problem, this research analyzes sentiment by considering both aspects and targets (Target-Aspect-Sentiment Detection, hereafter TASD). This study identified limitations of existing research in the field of TASD: local contexts are not fully captured, and the F1-score is highly sensitive to the number of epochs and the batch size. The existing model excels at capturing overall context and relations between words, but it struggles with phrases in the local context and is relatively slow to train. Therefore, this study tries to improve the model's performance. To achieve this objective, we additionally used an auxiliary loss for aspect-sentiment classification by constructing CNN (Convolutional Neural Network) layers parallel to the existing model. Whereas existing models analyze aspect-sentiment through BERT encoding, Pooler, and Linear layers, this research adds CNN layers with adaptive average pooling to the existing model, and training proceeds by adding an additional aspect-sentiment loss to the existing loss. In other words, during training, the auxiliary loss computed through the CNN layers allows the local context to be captured more closely. After training, the model performs aspect-sentiment analysis through the existing path. To evaluate the performance of this model, two datasets, SemEval-2015 Task 12 and SemEval-2016 Task 5, were used, and the F1-score increased compared to the existing models. When the batch size was 8 and the number of epochs was 5, the gap between the F1-scores of the existing models and this study was largest, at 29 and 45, respectively.
Even when the batch size and the number of epochs were adjusted, the F1-scores remained higher than those of the existing models. In other words, even with small batch and epoch numbers, the model can be trained effectively compared to the existing models, so it can be useful in situations where resources are limited. Through this study, aspect-based sentiments can be analyzed more accurately. Through various uses in business, such as development or establishing marketing strategies, both consumers and sellers will be able to make efficient decisions. In addition, the model can likely be trained and utilized by small businesses that do not have much data, given that it uses a pre-trained model and recorded a relatively high F1-score even with limited resources.
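One way to read the described architecture (BERT encoding with Pooler and Linear layers for the main path, plus parallel CNN layers with adaptive average pooling whose loss is added to the main loss) is sketched below in PyTorch. This is an interpretation of the abstract, not the released model; the checkpoint name, kernel size, and auxiliary-loss weight are assumptions.

```python
# Sketch of a BERT classifier with a parallel CNN auxiliary head (adaptive average
# pooling) whose loss is added to the main loss during training; inference uses the
# main (existing) path only, as the abstract describes.
import torch
import torch.nn as nn
from transformers import BertModel

class TASDWithCNNAux(nn.Module):
    def __init__(self, num_labels, aux_weight=0.5):  # aux_weight is an assumed hyperparameter
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")  # assumed checkpoint
        hidden = self.bert.config.hidden_size
        self.main_head = nn.Linear(hidden, num_labels)                  # pooled-output path
        self.cnn = nn.Conv1d(hidden, hidden, kernel_size=3, padding=1)  # local-context path
        self.aux_head = nn.Linear(hidden, num_labels)
        self.aux_weight = aux_weight

    def forward(self, input_ids, attention_mask, labels=None):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        main_logits = self.main_head(out.pooler_output)
        # CNN over token embeddings, then adaptive average pooling to a single vector.
        conv = torch.relu(self.cnn(out.last_hidden_state.transpose(1, 2)))
        aux_logits = self.aux_head(nn.functional.adaptive_avg_pool1d(conv, 1).squeeze(-1))
        if labels is None:
            return main_logits  # inference: existing path only
        ce = nn.CrossEntropyLoss()
        return ce(main_logits, labels) + self.aux_weight * ce(aux_logits, labels)
```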

Study on the Neural Network for Handwritten Hangul Syllabic Character Recognition (수정된 Neocognitron을 사용한 필기체 한글인식)

  • 김은진;백종현
    • Korean Journal of Cognitive Science
    • /
    • v.3 no.1
    • /
    • pp.61-78
    • /
    • 1991
  • This paper describes the application of a modified Neocognitron model with a backward path to the recognition of Hangul (Korean) syllabic characters. In his original report, Fukushima demonstrated that the Neocognitron can recognize handwritten numeric characters of size 19×19. This version accepts 61×61 images of handwritten Hangul syllabic characters, or parts thereof, entered with a mouse or a scanner. It consists of an input layer and 3 pairs of Us-Uc layers. The last Uc layer of this version, the recognition layer, consists of 24 planes of 5×5 cells, which indicate the identity of the grapheme receiving attention at a given time and its relative position in the input layer, respectively. The network was trained on 10 simple vowel graphemes and 14 simple consonant graphemes and their spatial features; patterns that were not easily learned were trained more extensively. The trained network, which can classify individual graphemes under deformation, noise, size variance, transformation, or rotation, was then used to recognize Korean syllabic characters, using its selective attention mechanism for the image segmentation task within a syllabic character. In initial tests, our model correctly recognized up to 79% of the various test patterns of handwritten Korean syllabic characters. The results of this study show the Neocognitron to be a powerful model for recognizing deformed handwritten characters from a large character set by segmenting its input images into recognizable parts. The same approach may be applied to the recognition of Chinese characters, which are much more complex in both structure and graphemes, but processing time appears to be the bottleneck before it can be implemented; special hardware such as a neural chip appears to be an essential prerequisite for practical use of the model. Further work is required to enable the model to recognize Korean syllabic characters consisting of complex vowels and complex consonants, for which correct recognition of the neighboring area between two simple graphemes becomes more critical.
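For orientation only, the sketch below reproduces the layer dimensions mentioned in the abstract (a 61×61 input reduced to a recognition layer of 24 planes of 5×5 cells) with a modern convolutional stack. It is not a Neocognitron implementation: it has no Us/Uc cell pairs, no unsupervised competitive learning, and no backward selective-attention path; the filter counts and kernel sizes are arbitrary.

```python
# Shape-only analogue of the described network: 61x61 grapheme image in,
# 24 planes of 5x5 cells out. Not a Neocognitron (no Us/Uc pairs, no attention path).
import torch
import torch.nn as nn

recognizer = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),   # 61 -> 57 (conv) -> 28 (pool)
    nn.Conv2d(16, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),  # 28 -> 24 (conv) -> 12 (pool)
    nn.Conv2d(32, 24, kernel_size=4, stride=2),                    # 12 -> 5
)

x = torch.randn(1, 1, 61, 61)   # one handwritten grapheme image (batch of 1)
print(recognizer(x).shape)      # torch.Size([1, 24, 5, 5])
```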

Comparison of Retinal Ganglion Cell Responses to Different Voltage Stimulation Parameters in Normal and rd1 Mouse Retina (정상망막과 변성망막에서 전압자극 파라미터 변화에 따른 망막신경절세포의 반응 비교)

  • Ye, Jang-Hee;Ryu, Sang-Baek;Kim, Kyung-Hwan;Goo, Yong-Sook
    • Progress in Medical Physics
    • /
    • v.21 no.2
    • /
    • pp.209-217
    • /
    • 2010
  • Retinal prostheses are being developed to restore vision for the blind with retinal diseases such as retinitis pigmentosa (RP) or age-related macular degeneration (AMD). Since retinal prostheses depend upon electrical stimulation to control neural activity, optimal stimulation parameters for successful encoding of visual information are one of the most important requirements for enabling visual perception. Therefore, in this paper, we focused on retinal ganglion cell (RGC) responses to different voltage stimulation parameters and compared threshold charge densities in normal and rd1 mice. For this purpose, we used an in vitro preparation of the retina of normal and rd1 mice on micro-electrode arrays. When the neural network of the rd1 mouse retina is stimulated with voltage-controlled pulses, RGCs in the degenerated retina respond to voltage amplitude or voltage duration modulation just as wild-type RGCs do. However, the temporal pattern of the RGC response is very different: wild-type RGCs show a single peak within 100 ms, while RGCs in the degenerated retina show multiple peaks (~4 peaks) with a ~10 Hz rhythm within 400 ms. The thresholds for electrical activation of RGCs are overall more elevated in rd1 mouse retinas than in wild-type mouse retinas: the thresholds for activation of RGCs in rd1 mouse retinas were on average two times higher (70.50-99.87 μC/cm² vs. 37.23-61.65 μC/cm²) in the voltage amplitude modulation experiment and five times higher (120.5-170.6 μC/cm² vs. 22.69-37.57 μC/cm²) in the voltage duration modulation experiment than those in wild-type mouse retinas. This is compatible with findings from human studies that the currents required for evoking visual percepts in RP patients are much higher than those needed in healthy individuals. These results will be used as a guideline for optimal stimulation parameters for the upcoming Korean-type retinal prosthesis.
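As a quick arithmetic check of the reported "two times" and "five times" threshold elevations, the midpoints of the quoted charge-density ranges can be compared directly; the snippet below uses only the figures given in the abstract.

```python
# Ratio of rd1 to wild-type threshold charge densities (uC/cm^2), using the midpoints
# of the ranges quoted in the abstract.
def midpoint(lo, hi):
    return (lo + hi) / 2

# Voltage-amplitude modulation experiment
rd1_amp, wt_amp = midpoint(70.50, 99.87), midpoint(37.23, 61.65)
# Voltage-duration modulation experiment
rd1_dur, wt_dur = midpoint(120.5, 170.6), midpoint(22.69, 37.57)

print(f"amplitude modulation: {rd1_amp / wt_amp:.1f}x higher in rd1")  # ~1.7x ("two times")
print(f"duration modulation:  {rd1_dur / wt_dur:.1f}x higher in rd1")  # ~4.8x ("five times")
```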

A New Bias Scheduling Method for Improving Both Classification Performance and Precision on the Classification and Regression Problems (분류 및 회귀문제에서의 분류 성능과 정확도를 동시에 향상시키기 위한 새로운 바이어스 스케줄링 방법)

  • Kim Eun-Mi;Park Seong-Mi;Kim Kwang-Hee;Lee Bae-Ho
    • Journal of KIISE: Software and Applications
    • /
    • v.32 no.11
    • /
    • pp.1021-1028
    • /
    • 2005
  • The general solution for classification and regression problems can be found by matching and modifying matrices with information from the real world and then learning these matrices in neural networks. This paper treats the primary space as the real world and the dual space as the space onto which the primary space is mapped using a kernel. In practice, there are two kinds of problems: complete systems, whose answer can be obtained using the inverse matrix, and ill-posed or singular systems, whose answer cannot be obtained directly from the inverse of the given matrix. Furthermore, problems are often of the latter kind; therefore, it is necessary to find a regularization parameter that turns ill-posed or singular problems into complete systems. This paper compares performance on both classification and regression problems among GCV and L-Curve, which are well-known methods for obtaining a regularization parameter, and kernel methods. Both GCV and L-Curve perform excellently in obtaining regularization parameters, and their performances are similar, although they show slightly different results under different problem conditions. However, these methods are two-step solutions, because they must first calculate the regularization parameter and then apply it within another solving method. Compared with GCV and L-Curve, kernel methods are a one-step solution that simultaneously learns the regularization parameter within the learning process of the pattern weights. This paper also suggests a dynamic momentum, learned under a limited proportional condition between the learning epoch and the performance on the given problem, to increase the performance and precision of the regularization. Finally, this paper shows that the suggested solution obtains better or equivalent results compared with GCV and L-Curve through experiments using the Iris data, a standard dataset for classification, Gaussian data, typical data for singular systems, and the Shaw data, a one-dimensional image restoration problem.
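For the two-step baselines, the GCV criterion for picking a ridge regularization parameter can be sketched as follows. This is a minimal NumPy illustration on synthetic data, not the paper's implementation; the lambda grid and data shapes are assumptions.

```python
# Sketch: choose a ridge regularization parameter lambda by generalized
# cross-validation, GCV(lambda) = ||(I - H)y||^2 / (n - trace(H))^2,
# with H = X (X^T X + lambda I)^{-1} X^T the ridge hat matrix.
import numpy as np

def gcv_score(X, y, lam):
    n, p = X.shape
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
    residual = y - H @ y
    return float(residual @ residual) / (n - np.trace(H)) ** 2

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))                            # synthetic design matrix
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=100)  # synthetic targets with noise

lambdas = np.logspace(-6, 2, 50)                          # assumed search grid
best = min(lambdas, key=lambda lam: gcv_score(X, y, lam))
print(f"GCV-selected lambda: {best:.2e}")
```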

Design of Deep Learning-based Tourism Recommendation System Based on Perceived Value and Behavior in Intelligent Cloud Environment (지능형 클라우드 환경에서 지각된 가치 및 행동의도를 적용한 딥러닝 기반의 관광추천시스템 설계)

  • Moon, Seok-Jae;Yoo, Kyoung-Mi
    • Journal of the Korean Applied Science and Technology
    • /
    • v.37 no.3
    • /
    • pp.473-483
    • /
    • 2020
  • This paper proposes a tourism recommendation system in an intelligent cloud environment that uses information on tourist behavior combined with perceived value. The proposed system applies tourist information and empirical analysis information reflecting tourists' perceived value and behavioral intention to a tourism recommendation system built with wide and deep learning technology. It collects and analyzes the various tourist information that can be gathered, together with the values tourists commonly perceive and their behavioral intentions, and applies them to the recommendation system. It provides empirical information by analyzing and mapping the associations among tourism information, perceived value, and behavior onto tourism platforms in various fields. In addition, the paper presents a tourism recommendation system using wide and deep learning technology, which can achieve both memorization and generalization in one model by jointly training a linear model component and a neural network component, along with a pipeline operation method. As a result of applying the wide and deep learning model, the recommendation system presented in this paper increased the app subscription rate on the visit page of a tourism-related app store by 3.9% compared to the control group; against another 1% group served a model using the same variables but only the deep side of the neural network structure, the subscription rate increased by 1% over the deep-only model. In addition, by measuring the area under the receiver operating characteristic curve (AUC) on the dataset, the offline AUC of the wide-and-deep learning model was found to be somewhat higher, with an even greater effect on online traffic.
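The wide-and-deep structure the abstract refers to, a linear "wide" part for memorization trained jointly with a neural "deep" part for generalization, can be sketched as below. The feature dimensions, layer sizes, and the app-subscription target are illustrative assumptions, not the paper's configuration.

```python
# Sketch of a jointly trained wide-and-deep model with an AUC metric, mirroring the
# evaluation described in the abstract. Feature sizes and names are hypothetical.
import tensorflow as tf

wide_in = tf.keras.Input(shape=(200,), name="wide_features")  # e.g. cross-product features
deep_in = tf.keras.Input(shape=(64,), name="deep_features")   # e.g. embedded tourist/behavior features

deep = tf.keras.layers.Dense(128, activation="relu")(deep_in)
deep = tf.keras.layers.Dense(64, activation="relu")(deep)

combined = tf.keras.layers.concatenate([wide_in, deep])            # wide + deep joined before the output
output = tf.keras.layers.Dense(1, activation="sigmoid")(combined)  # e.g. app-subscription probability

model = tf.keras.Model([wide_in, deep_in], output)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=[tf.keras.metrics.AUC()])
model.summary()
```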

Liver cancer Prediction System using Biochip (바이오칩을 이용한 간암진단 예측 시스템)

  • Lee, Hyoung-Keun;Kim, Choong-Won;Lee, Joon;Kim, Sung-Chun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2008.05a
    • /
    • pp.967-970
    • /
    • 2008
  • Liver cancer is one of the most common cancers in Korea, second only to gastric cancer in incidence. Because it often progresses slowly without distinctive symptoms in its early stages, it is frequently diagnosed only at an advanced stage, for which there is no reliable treatment; the prognosis of patients therefore depends heavily on early detection. The system described here is intended for the early detection of liver cancer: blood from patients confirmed to have liver cancer and from a non-cancer control group is reacted with a biochip, and the resulting biochip profiles are classified by machine learning. In this paper, blood samples from a liver cancer patient group and a control group (one group of 50 samples and the other of 100, for a total of 150) were reacted with a biochip composed of 1,149 different oligos, and the acquired data were analyzed with an artificial neural network, yielding a classification accuracy of 92-96%.
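A classifier of the kind described, a neural network over biochip profiles, might look like the scikit-learn sketch below. The data here are random placeholders standing in for the 150 blood samples and the (as interpreted above) 1,149 oligo probe signals; it is not the authors' system and will not reproduce the reported 92-96% accuracy.

```python
# Sketch: neural-network classification of biochip profiles with cross-validation.
# X and y are random placeholders; the real data reportedly reached 92-96% accuracy.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 1149))   # 150 blood samples x 1,149 oligo probe signals (placeholder)
y = rng.integers(0, 2, size=150)   # 1 = liver cancer, 0 = control (placeholder labels)

clf = make_pipeline(StandardScaler(), MLPClassifier(hidden_layer_sizes=(64,), max_iter=500))
print(cross_val_score(clf, X, y, cv=5))  # per-fold accuracy
```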


Development of Expert system for Plant Construction Project Management (플랜트 건설 공사를 위한 사업관리 전문가 시스템의 개발)

  • 김우주;최대우;김정수
    • Journal of Information Technology Application
    • /
    • v.2 no.1
    • /
    • pp.1-24
    • /
    • 2000
  • Project management in the construction field inherently involves more uncertainty and more risks than in other areas. This is the very reason project management is recognized as an important task for construction companies. To perform better in project management, we need a system that maintains consistency, in an automatic or semi-automatic manner, across project management stages such as the project definition stage, the project planning stage, and the project design and implementation stage. However, since the early stages, such as the definition and planning stages, have many unstructured features and depend on the unique expertise or experience of a specific company, it is difficult to provide systematic support for the tasks of these stages. This problem becomes even harder to solve in the plant construction domain, which is our target domain. Therefore, in this paper, we propose and implement a systematic approach to this problem for the early project management stages in the plant construction domain. The results of our approach can be used not only for the early project management stages but also, automatically, as input to commercial project management tools for the middle project management stages. In this way, the construction project can be consistently managed from the definition stage to the implementation stage in a seamless manner. To achieve this, we adopt knowledge-based inference, CBR, and neural networks as the major methodologies, and we applied our approach to two real-world cases, a power plant and a drainage treatment plant, from a leading construction company in Korea. Since these two application cases showed very successful results, we can say our approach was successfully validated for the plant construction area. Finally, we believe our approach will contribute to many project management problems in the broader construction area.


The Churchlands' Theory of Representation and the Semantics (처칠랜드의 표상이론과 의미론적 유사성)

  • Park, Je-Youn
    • Korean Journal of Cognitive Science
    • /
    • v.23 no.2
    • /
    • pp.133-164
    • /
    • 2012
  • Paul Churchland (1989) proposes a theory of representation based on results from cognitive biology and connectionist AI studies. According to the theory, our representations of the diverse phenomena in the world can be represented as positions in phase state spaces determined by the activations of neurons or assemblies of neurons. He insists that connectionist AI neural networks can have the semantic category systems needed to recognize the world. But Fodor and Lepore (1996) do not view this prospect optimistically. From their point of view, Churchland's theory of representation rests on Quine's holism, and the network semantics cannot explain how criteria of semantic content similarity are possible, so neither can the theory. This paper aims to examine which perspective is better: that of the theory or that of Fodor and Lepore. On my understanding of the state space theory of representation, artificial networks can establish criteria of content similarity through the learning algorithm. On this basis, I argue that Fodor and Lepore's points cannot penetrate the Churchlands' theory. From the viewpoint of the theory, we can see how future artificial systems could have conceptual systems for recognizing the world, and thus what cognitive scientists have to focus on.
