• Title/Summary/Keyword: text input


A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok; Lee, Hyun Jun; Oh, Kyong Joo
    • Journal of Intelligence and Information Systems, v.25 no.2, pp.25-38, 2019
  • Selecting high-quality information that meets the interests and needs of users from the overflowing contents is becoming more important as information continues to be generated. In this flood of information, efforts are being made to better reflect the user's intention in search results, rather than treating an information request as a simple string. Large IT companies such as Google and Microsoft also focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is one of the fields where text data analysis is expected to be useful, because new information is constantly generated and the earlier the information, the more valuable it is. Automatic knowledge extraction can be effective in such areas, where the information flow is vast and new information continues to emerge. However, automatic knowledge extraction faces several practical difficulties. First, it is hard to build corpora from different fields with the same algorithm, and hard to extract high-quality triples. Second, producing labeled text data by hand becomes more difficult as the extent and scope of knowledge grow and patterns are constantly updated. Third, performance evaluation is difficult owing to the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome these limits and improve the semantic performance of stock-related information search, this study extracts knowledge entities using a neural tensor network and evaluates the performance of the result. Unlike previous work, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the effectiveness of the model. The study has three significances. First, it presents a practical and simple automatic knowledge extraction method that can be applied directly. Second, it shows that performance evaluation is possible through a simple problem definition. Finally, the expressiveness of the knowledge is increased by generating input data on a sentence basis without complex morphological analysis. An empirical analysis and an objective performance evaluation method are also presented. For the empirical study, experts' reports on 30 individual stocks, the top 30 items by publication frequency from May 30, 2017 to May 21, 2018, are used. Of the 5,600 reports in total, 3,074 (about 55%) are designated as the training set and the remaining 45% as the test set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted with the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using a neural tensor network, one score function per stock is trained.
Thus, when a new entity from the test set appears, its score is computed with every score function, and the stock whose function yields the highest score is predicted as the item related to the entity. To evaluate the presented model, we measure its predictive power, and whether the score functions are well constructed, by calculating the hit ratio over all reports in the test set. The model shows 69.3% hit accuracy on the test set of 2,526 reports, which is meaningfully high despite the constraints under which the research was conducted. Looking at the prediction performance per stock, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, perform far below average; this may be due to interference with other similar items and the generation of new knowledge. In this paper, we propose a methodology to find the key entities, or combinations of them, needed to search related information in accordance with the user's investment intention. Graph data is generated using only the named entity recognition tool and applied to the neural tensor network, without learning a field-specific corpus or word vectors. The empirical test confirms the effectiveness of the presented model as described above. Some limits and complements remain; most notably, the markedly poor performance on a few stocks calls for further research. Finally, the empirical study confirms that the learning method presented here can be used to match new text information semantically with the related stocks.
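To make the scoring step concrete, the following is a minimal sketch, not the authors' code, of a per-stock neural tensor network score function over one-hot entity vectors. The entity dimension d = 100 (top-100 entities per stock) follows the abstract; the number of tensor slices k, the use of PyTorch, and all initialization details are assumptions for illustration.

```python
import torch
import torch.nn as nn

class NTNScore(nn.Module):
    """Sketch of a neural-tensor-network score function for one stock."""
    def __init__(self, d, k):
        super().__init__()
        self.W = nn.Parameter(torch.randn(k, d, d) * 0.01)  # bilinear tensor slices
        self.V = nn.Parameter(torch.randn(k, d) * 0.01)     # linear term
        self.b = nn.Parameter(torch.zeros(k))
        self.u = nn.Parameter(torch.randn(k) * 0.01)        # output combination

    def forward(self, e):
        # e: (d,) one-hot entity vector; compute e^T W[s] e for each slice s
        bilinear = torch.einsum('i,kij,j->k', e, self.W, e)
        return self.u @ torch.tanh(bilinear + self.V @ e + self.b)

# One score function per stock; a new entity is assigned to the stock
# whose function yields the highest score.
d, k, n_stocks = 100, 4, 30
models = [NTNScore(d, k) for _ in range(n_stocks)]
entity = torch.zeros(d)
entity[7] = 1.0  # hypothetical one-hot entity vector
scores = [m(entity).item() for m in models]
predicted_stock = scores.index(max(scores))
```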

Korean Word Sense Disambiguation using Dictionary and Corpus (사전과 말뭉치를 이용한 한국어 단어 중의성 해소)

  • Jeong, Hanjo; Park, Byeonghwa
    • Journal of Intelligence and Information Systems, v.21 no.1, pp.1-13, 2015
  • As opinion mining in big data applications has been highlighted, a lot of research on unstructured data has been carried out. Social media on the Internet generate unstructured or semi-structured data every second, often in the natural languages we use in daily life. Many words in human languages have multiple meanings or senses, which makes it very difficult for computers to extract useful information from these datasets. Traditional web search engines are usually based on keyword search, which can produce results far from users' intentions. Even though much progress has been made over the years in providing users with appropriate results, there is still much room for improvement. Word sense disambiguation plays a very important role in natural language processing and is considered one of the most difficult problems in this area. Major approaches to word sense disambiguation can be classified as knowledge-based, supervised corpus-based, and unsupervised corpus-based. This paper presents a method that automatically generates a corpus for word sense disambiguation from the examples in existing dictionaries, avoiding expensive sense-tagging processes. Its effectiveness is tested with a Naïve Bayes model, a supervised learning algorithm, using the Korean standard unabridged dictionary and the Sejong Corpus. The Korean standard unabridged dictionary has approximately 57,000 sentences; the Sejong Corpus has about 790,000 sentences tagged with both part-of-speech and senses. For the experiment, the dictionary and the corpus were tested both combined and separately, using cross validation. Only nouns, the target subjects in word sense disambiguation, were selected: 93,522 word senses among 265,655 nouns, plus 56,914 sentences from related proverbs and examples, were additionally combined into the corpus. The Sejong Corpus was easily merged with the Korean standard unabridged dictionary because it is tagged with the sense indices defined by that dictionary. Sense vectors were formed after the merged corpus was created, and the terms used in creating them were added to the named entity dictionary of a Korean morphological analyzer. Using the extended named entity dictionary, term vectors were extracted from the input sentences. Given an extracted term vector and the sense vector model built during pre-processing, the sense-tagged terms were determined by vector space model based word sense disambiguation. The experiments show that better precision and recall are obtained with the merged corpus, suggesting that the method can practically enhance the performance of Internet search engines and help derive more accurate sentence meanings in natural language processing tasks pertinent to search engines, opinion mining, and text mining. The Naïve Bayes classifier used in this study is a supervised learning algorithm based on Bayes' theorem, under the assumption that all features are conditionally independent given the sense.
Even though this independence assumption is not realistic and ignores correlations between attributes, the Naïve Bayes classifier is widely used because of its simplicity, and in practice it is known to be very effective in applications such as text classification and medical diagnosis. However, further research needs to be carried out to consider all possible combinations, or partial combinations, of the senses in a sentence. The effectiveness of word sense disambiguation may also be improved if rhetorical structures or morphological dependencies between words are analyzed through syntactic analysis.
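As a concrete illustration of the classification step, here is a minimal sketch, assumed rather than taken from the paper, of Naïve Bayes sense disambiguation trained on sense-tagged example sentences such as those merged from the dictionary and the Sejong Corpus. The data format and the Laplace smoothing are illustrative choices.

```python
import math
from collections import Counter, defaultdict

def train(examples):
    # examples: list of (context_words, sense_id) pairs
    sense_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for words, sense in examples:
        sense_counts[sense] += 1
        for w in words:
            word_counts[sense][w] += 1
            vocab.add(w)
    return sense_counts, word_counts, vocab

def disambiguate(words, sense_counts, word_counts, vocab):
    total = sum(sense_counts.values())
    best, best_lp = None, float('-inf')
    for sense, n in sense_counts.items():
        lp = math.log(n / total)                   # log prior P(sense)
        denom = sum(word_counts[sense].values()) + len(vocab)
        for w in words:                            # naive independence assumption
            lp += math.log((word_counts[sense][w] + 1) / denom)  # Laplace smoothing
        if lp > best_lp:
            best, best_lp = sense, lp
    return best
```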

Predicting stock movements based on financial news with systematic group identification (시스템적인 군집 확인과 뉴스를 이용한 주가 예측)

  • Seong, NohYoon; Nam, Kihwan
    • Journal of Intelligence and Information Systems, v.25 no.3, pp.1-17, 2019
  • Because stock price forecasting is an important issue both academically and practically, research in stock price prediction has been actively conducted. Such research is classified by whether it uses structured or unstructured data. With structured data such as historical stock prices and financial statements, past studies usually relied on technical and fundamental analysis. In the big data era, the amount of information has rapidly increased, and artificial intelligence methods that find meaning by quantifying textual information, the unstructured data that accounts for much of this growth, have developed rapidly. With these developments, many attempts are being made to predict stock prices from online news by applying text mining. The methodology adopted in many papers is to forecast a stock's price with news about the target company itself. However, previous research shows that not only news about a target company but also news about related companies can affect its stock price. Finding a highly relevant company is not easy, though, because of market-wide effects and random signals. Existing studies have therefore identified relevant companies primarily through pre-determined international industry classification standards. Yet according to recent research, the Global Industry Classification Standard shows uneven homogeneity within its sectors, so forecasting with all sector members rather than only the truly relevant companies can hurt predictive performance. To overcome this limitation, we are the first to combine random matrix theory with text mining for stock prediction. When the dimension of the data is large, classical limit theorems are no longer suitable because statistical efficiency is reduced, so a simple correlation analysis in the financial market does not capture the true correlation. To solve this, we adopt random matrix theory, mainly used in econophysics, to remove market-wide effects and random signals and find the true correlation between companies. With the true correlation, we perform cluster analysis to find relevant companies. Based on the clustering, we use a multiple kernel learning algorithm, an ensemble of support vector machines, to incorporate the effects of the target firm and its relevant firms simultaneously; each kernel is assigned to predict stock prices with features from the financial news of the target firm or one of its relevant firms. The results of this paper are as follows. (1) Following the existing research flow, we confirmed that forecasting stock prices using news from relevant companies is effective. (2) Identifying relevant companies in the wrong way can lower AI prediction performance. (3) The proposed approach with random matrix theory outperforms previous studies when cluster analysis is performed on the true correlation, with market-wide effects and random signals removed. The contribution of this study is as follows. First, it shows that random matrix theory, used mainly in econophysics, can be combined with artificial intelligence to produce good methodologies.
This suggests that it is important not only to develop AI algorithms but also to adopt theory from physics, extending existing research that integrated artificial intelligence with complex systems theory through transfer entropy. Second, this study stresses that finding the right companies in the stock market is an important issue: it matters not only to study artificial intelligence algorithms but also to adjust the input values on theoretical grounds. Third, we confirmed that firms grouped together by the Global Industry Classification Standard (GICS) may have low relevance to each other, and suggest that relevance should be defined theoretically rather than simply read off the GICS.
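The correlation-cleaning step can be sketched as follows. This is a minimal illustration under common random-matrix-theory assumptions (Marchenko-Pastur bounds for the noise bulk, the largest eigenvalue treated as the market-wide mode), not the authors' implementation; clustering of truly related firms would then run on the cleaned matrix.

```python
import numpy as np

def rmt_filter(returns):
    # returns: (T, N) matrix of standardized stock returns
    T, N = returns.shape
    corr = np.corrcoef(returns, rowvar=False)
    lam, vec = np.linalg.eigh(corr)          # eigenvalues in ascending order
    q = N / T
    lam_max = (1 + np.sqrt(q)) ** 2          # upper Marchenko-Pastur bound

    cleaned = np.zeros_like(corr)
    for i in range(N):
        # keep informative eigenmodes above the noise bound, excluding
        # the largest (market-wide) mode at index N - 1
        if lam[i] > lam_max and i != N - 1:
            v = vec[:, i]
            cleaned += lam[i] * np.outer(v, v)
    np.fill_diagonal(cleaned, 1.0)           # restore unit diagonal
    return cleaned                           # input for cluster analysis
```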

A Framework for Digitalizing Handwritten Document using Digital Pen and Handwriting Recognition Technology (디지털펜과 필기체인식 기술을 이용한 수기문서 전자화 프레임워크)

  • Son, Bong-Ki; Kim, Hak-Joon
    • Journal of the Korea Academia-Industrial cooperation Society, v.12 no.3, pp.1417-1426, 2011
  • Business still relies heavily on pen and paper, for legal reasons or for convenience. Handwritten documents must be converted into digital form so that IT systems can manage and process them in real time. Because previous digitalization systems convert handwritten documents by scanning and post-processing them, it is difficult to keep the work process seamless. This paper proposes LiveForm, a framework for digitalizing handwritten documents using a digital pen and handwriting recognition technology. To prove its applicability, we also implement a LiveForm-based service in an industrial gas distribution process and analyze the effects of the system. LiveForm generates a digital image identical to the handwritten document as the user writes on paper printed with an absolute coordinate pattern, and converts the handwriting data to digital text to insert the information into a back-end system. A LiveForm-based system thus eliminates both the scanning step of document digitalization and the keyboard entry of data into the back-end system in paper-based information gathering, and so can improve work processes in various business areas.
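As a rough picture of the pipeline, here is a minimal, hypothetical sketch, not the LiveForm implementation: digital-pen strokes carrying absolute paper coordinates are grouped by form field, recognized as text, and posted to a back-end system. The `Stroke` structure and the `recognize`/`backend_insert` functions are assumed placeholders.

```python
from dataclasses import dataclass

@dataclass
class Stroke:
    field_id: str   # form field the stroke falls in (from the coordinate pattern)
    points: list    # [(x, y, timestamp), ...] in absolute paper coordinates

def digitalize(strokes, recognize, backend_insert):
    # recognize(points) -> str and backend_insert(field_id, text) -> None
    # are assumed external handwriting-recognition and back-end functions.
    fields = {}
    for s in strokes:
        fields.setdefault(s.field_id, []).extend(s.points)
    for field_id, points in fields.items():
        text = recognize(points)          # handwriting recognition
        backend_insert(field_id, text)    # no scanning, no keyboard re-entry
```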

Multi-Level Sequence Alignment : An Adaptive Control Method Between Speed and Accuracy for Document Comparison (계산속도 및 정확도의 적응적 제어가 가능한 다단계 문서 비교 시스템)

  • Seo, Jong-Kyu; Tak, Haesung; Cho, Hwan-Gue
    • Journal of KIISE, v.41 no.9, pp.728-743, 2014
  • Fingerprinting and sequence alignment are well-known approaches to document similarity comparison. A fingerprinting method is simple and fast, but it cannot locate the particular similar regions. A string alignment method identifies regions of similarity by arranging the sequences of a string; it can find the particular similar regions, but at the cost of more computing time. Multi-Level Alignment (MLA) is a new method designed to combine the advantages of both. MLA divides the input documents into uniform-length blocks, extracts fingerprints from each block, and calculates the similarity of block pairs by comparing the fingerprints, producing a similarity table. Finally, sequence alignment over the similarity table specifies the longest similar regions. MLA lets users change the block size to control the balance between the fingerprint algorithm and sequence alignment. Because a document is divided into several blocks, a similar region may be fragmented across two or more blocks; to solve this fragmentation problem, we propose a united-block method. Experiments show that computing document similarity with the united block is more accurate than the original MLA method, with only a minor loss in speed.
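The multi-level idea can be sketched as follows; this is a simplified, assumed illustration, not the MLA implementation. Documents are split into fixed-size blocks, blocks are compared by fingerprint overlap, and the resulting table is what sequence alignment would scan for long similar regions. A smaller block size moves the method toward pure alignment (accuracy), a larger one toward pure fingerprinting (speed).

```python
def fingerprints(text, k=5):
    # k-gram hash fingerprints of a block (illustrative choice of k)
    return {hash(text[i:i + k]) for i in range(len(text) - k + 1)}

def block_similarity(doc_a, doc_b, block=200, k=5):
    blocks_a = [doc_a[i:i + block] for i in range(0, len(doc_a), block)]
    blocks_b = [doc_b[i:i + block] for i in range(0, len(doc_b), block)]
    table = [[0.0] * len(blocks_b) for _ in blocks_a]
    for i, a in enumerate(blocks_a):
        fa = fingerprints(a, k)
        for j, b in enumerate(blocks_b):
            fb = fingerprints(b, k)
            if fa or fb:
                table[i][j] = len(fa & fb) / len(fa | fb)  # Jaccard overlap
    return table  # sequence alignment over this table locates similar regions
```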

A Design and Implementation of a Content_Based Image Retrieval System using Color Space and Keywords (칼라공간과 키워드를 이용한 내용기반 화상검색 시스템 설계 및 구현)

  • Kim, Cheol-Ueon; Choi, Ki-Ho
    • The Transactions of the Korea Information Processing Society, v.4 no.6, pp.1418-1432, 1997
  • Most content-based image retrieval techniques use color and texture as retrieval indices. Among color techniques, retrieval based on color histograms or color pairs suffers from a lack of spatial and textual information. This paper describes the design and implementation of a content-based image retrieval system using color space and keywords. The preprocessor for image retrieval uses the existing HSI (Hue, Saturation, Intensity) coordinate system and splits an image into a chromatic region and an achromatic region. The image size is normalized to 200*N or N*200 and true color is converted to 256 colors. Two color histograms, for background and object, are used to decide color selection in the color space, and spatial information is obtained using maximum entropy discretization. The class, color, shape, location, and size of an image can be specified by keyword; input colors are limited to 15 chromatic and achromatic color keywords of the Korea Industrial Standards. Retrieval is based on the similarity of these properties. The weights for color space, $\alpha$ (%), and keyword, $\beta$ (%), can be chosen by the user when entering a query, adjusting the values according to the properties of the image content. In tests against query images, retrieval using a single extracted feature, color space or keyword alone, scored lower than weighted retrieval; with weights, the measured averages were approximately Precision 0.858, Recall 0.936, RT 1, and MT 0. These results show a higher retrieval effectiveness than content-based image retrieval using color space or keywords alone.
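The weighted combination described above reduces to a simple convex mix of the two similarity scores; the following is a minimal sketch under that reading, with the function name and example values chosen for illustration.

```python
def combined_similarity(color_sim, keyword_sim, alpha, beta):
    # color_sim, keyword_sim in [0, 1]; alpha and beta are user-chosen
    # percentage weights, expected to sum to 100
    return (alpha * color_sim + beta * keyword_sim) / 100.0

# e.g. emphasize color for scenery-like queries, keywords for object queries
score = combined_similarity(0.72, 0.40, alpha=70, beta=30)
```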


Implementation of Interactive Media Content Production Framework based on Gesture Recognition (제스처 인식 기반의 인터랙티브 미디어 콘텐츠 제작 프레임워크 구현)

  • Koh, You-jin; Kim, Tae-Won; Kim, Yong-Goo; Choi, Yoo-Joo
    • Journal of Broadcast Engineering, v.25 no.4, pp.545-559, 2020
  • In this paper, we propose a content creation framework that enables users without programming experience to easily create interactive media content that responds to user gestures. In the proposed framework, users define the gestures they use and the media effects that respond to them by number, and link them in a text-based configuration file. The interactive media content is linked with a dynamic projection mapping module to track the user's location and project the media effects onto the user. To reduce the processing time and memory burden of gesture recognition, the user's movement is expressed as a grayscale motion history image. We designed a convolutional neural network model for gesture recognition that takes motion history images as input. The number of network layers and the hyperparameters of the model were determined through experiments recognizing five gestures, and then applied in the proposed framework. In the gesture recognition experiment, we obtained a recognition accuracy of 97.96% and a processing speed of 12.04 FPS. In an experiment connecting three media effects, we confirmed that the intended media effect was displayed appropriately in real time according to the user's gesture.
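For intuition, here is a minimal sketch, assumed rather than the authors' code, of building a grayscale motion history image (MHI) from frame differences, the representation fed to the CNN. The decay constant `tau` and the difference threshold are illustrative values.

```python
import numpy as np

def update_mhi(mhi, prev_gray, cur_gray, tau=30, thresh=25):
    # pixels that moved this frame are set to tau; older motion decays
    # by 1 per frame, so recent movement appears brighter
    motion = np.abs(cur_gray.astype(int) - prev_gray.astype(int)) > thresh
    return np.where(motion, tau, np.maximum(mhi - 1, 0))

# usage over a frame sequence (normalize mhi / tau to [0, 1] for the CNN):
# mhi = np.zeros(frame_shape)
# for each new frame: mhi = update_mhi(mhi, prev_gray, cur_gray)
```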

Multi-Modal Controller Usability for Smart TV Control

  • Yu, Jeongil; Kim, Seongmin; Choe, Jaeho; Jung, Eui S.
    • Journal of the Ergonomics Society of Korea, v.32 no.6, pp.517-528, 2013
  • Objective: The objective of this study was to suggest a multi-modal controller type for Smart TV control. Background: Many issues regarding the Smart TV are arising due to the rising complexity of its features; one specific issue is what type of controller should be used to perform given tasks. This study examines the ongoing trend of the controller. Method: The participants had experience with Smart TVs and were 20 to 30 years of age. A pre-survey determined the first independent variable, five tasks (Live TV, Record, Share, Web, App Store). The second independent variable was the type of controller (conventional, mouse, and voice-based remote controllers). The dependent variables were preference, task completion time, and error rate. The study consisted of a series of three experiments: the first used a uni-modal controller for the tasks, the second a dual-modal controller, and the third a triple-modal controller. Results: The first experiment revealed that the uni-modal controllers (conventional and voice) showed the best results for the Live TV task. The second experiment revealed that the dual-modal controllers (conventional-voice and conventional-mouse combinations) showed the best results for the Share, Web, and App Store tasks. The third experiment revealed that the triple-modal controller was not more effective than the dual-modal controller at any level. Conclusion: For controlling simple tasks on a Smart TV, our results showed that a uni-modal controller was more effective than a dual-modal controller, whereas complex tasks were better suited to the dual-modal controller. User preference for a controller differs according to the Smart TV function: preference was high for the uni-modal controller on simple functions and for dual-modal controllers on complex tasks. In accordance with task characteristics, users strongly preferred the voice controller for channel and volume adjustment and the conventional controller for menu selection. In situations where the user had to input text, the voice controller had the highest preference, while the mouse-type and voice controllers were most preferred for performing a search or selecting items in a menu. Application: The results of this study may be utilized in the design of a controller that can effectively carry out the various tasks of a Smart TV.

Speech Animation Synthesis based on a Korean Co-articulation Model (한국어 동시조음 모델에 기반한 스피치 애니메이션 생성)

  • Jang, Minjung; Jung, Sunjin; Noh, Junyong
    • Journal of the Korea Computer Graphics Society, v.26 no.3, pp.49-59, 2020
  • In this paper, we propose a speech animation synthesis method specialized for Korean through a rule-based co-articulation model. Speech animation is widely used in cultural industries that require natural and realistic motion, such as movies, animations, and games. However, because audio-driven speech animation techniques have mainly been developed for English, the animation results for domestic content are often visually very unnatural; for example, dubbing by a voice actor is played with no mouth motion at all, or at best with an unsynchronized loop of simple mouth shapes. Language-independent speech animation models exist, but they do not yet ensure the quality required for domestic content production. We therefore propose a natural speech animation synthesis method, driven by input audio and text, that reflects the linguistic characteristics of Korean. Reflecting the fact that vowels largely determine the mouth shape in Korean, a co-articulation model that separates the lips and the tongue is defined to solve the previous problems of lip distortion and the occasional loss of phoneme characteristics. Our model also reflects differences in prosodic features for improved dynamics in speech animation. Through user studies, we verify that the proposed model can synthesize natural speech animation.
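To illustrate the rule-based idea only, here is a minimal, hypothetical sketch, not the authors' model: vowels drive both the lip and tongue channels, while a consonant borrows the lip shape of the following vowel (co-articulation) and keeps its own tongue target. The phoneme set is romanized and the shape labels are invented for the example.

```python
LIP_SHAPE = {'a': 'open', 'o': 'round', 'u': 'round_narrow',
             'i': 'spread', 'e': 'mid_open'}      # assumed lip-shape labels
TONGUE_POS = {'a': 'low', 'o': 'back', 'u': 'back_high',
              'i': 'front_high', 'e': 'front_mid'}

def visemes(phonemes):
    # map a phoneme sequence to (lip, tongue) keyframes with the two
    # channels handled separately
    frames = []
    for i, p in enumerate(phonemes):
        if p in LIP_SHAPE:   # vowel: drives both channels
            frames.append((LIP_SHAPE[p], TONGUE_POS[p]))
        else:                # consonant: lips from next vowel, own tongue
            nxt = next((q for q in phonemes[i + 1:] if q in LIP_SHAPE), 'a')
            frames.append((LIP_SHAPE[nxt], 'consonant_' + p))
    return frames

print(visemes(['k', 'a', 'n', 'u']))  # romanized toy input
```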

Improvement of Endoscopic Image using De-Interlacing Technique (De-Interlace 기법을 이용한 내시경 영상의 화질 개선)

  • 신동익; 조민수; 허수진
    • Journal of Biomedical Engineering Research, v.19 no.5, pp.469-476, 1998
  • When medical images such as ultrasonography and endoscopy are acquired and displayed on the VGA monitor of a PC system, tear-drop image degradation appears through scan conversion. In this study, we compare several methods that can remove this degradation and implement a hardware system that resolves the problem in real time on a PC. With a dedicated de-interlacing device and a PCI bridge, the system achieves high-quality image display together with real-time acquisition and processing, and image quality is improved remarkably. Because it is implemented as a PC-based system, image acquisition, saving, text annotation of the images, and PACS networking can all be implemented easily.
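For intuition about what de-interlacing does, here is a minimal sketch, assumed and software-only unlike the paper's hardware solution, of simple line-averaging ("bob") de-interlacing: one field's scan lines are kept and the missing lines are rebuilt by vertical interpolation, removing the tearing artifacts of interlaced video on a progressive display.

```python
import numpy as np

def deinterlace_bob(frame):
    # keep the even scan lines (one field) and rebuild the odd lines by
    # averaging their vertical neighbors; edge rows are left as-is
    out = frame.astype(float).copy()
    for y in range(1, frame.shape[0] - 1, 2):
        out[y] = (out[y - 1] + out[y + 1]) / 2.0
    return out.astype(frame.dtype)
```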
