• Title/Summary/Keyword: Social Media Uses


The Method for Real-time Complex Event Detection of Unstructured Big data (비정형 빅데이터의 실시간 복합 이벤트 탐지를 위한 기법)

  • Lee, Jun Heui; Baek, Sung Ha; Lee, Soon Jo; Bae, Hae Young
    • Spatial Information Research, v.20 no.5, pp.99-109, 2012
  • Recently, the growth of social media and the spread of smartphones have considerably increased the amount of data generated through SNS (Social Network Services). Accordingly, the concept of Big Data has emerged, and many researchers are seeking ways to make the best use of it. Maximizing the value of the big data held by companies requires combining it with existing data, and because the physical and logical storage structures of these data sources differ widely, a system that can integrate and manage them is needed. MapReduce was developed to process big data quickly through distributed processing, but it is impractical to build and store an index for every keyword, and the store-then-search cycle makes real-time processing difficult; processing complex events over heterogeneous data also incurs extra cost when no unified processing structure exists. To address this, an existing Complex Event Processing (CEP) system can be used: a CEP system receives data from different sources and combines them, enabling complex event processing that is especially useful for real-time handling of stream data. Nevertheless, unstructured data based on the text of SNS posts and Internet articles is managed as plain strings, so every query requires string comparisons, which degrades performance. We therefore extend a CEP system to manage unstructured data and process queries quickly by giving strings a logical schema: keyword strings are converted to integers through filtering against a keyword set. In addition, by processing stream data in memory in real time within the CEP system, we reduce the latency incurred by storing data to disk and reading it back for query processing.
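As a rough illustration of the keyword-to-integer conversion described above, the sketch below assigns each registered keyword an integer ID once, so that stream filtering and downstream CEP predicates compare integers instead of strings. The names (KeywordFilter, to_event) and the keyword set are illustrative, not from the paper.

```python
class KeywordFilter:
    """Map a fixed keyword set to integer IDs so that event
    matching avoids repeated string comparisons."""

    def __init__(self, keywords):
        # One-time assignment of integer IDs to the keyword set.
        self.ids = {kw: i for i, kw in enumerate(keywords)}

    def to_event(self, text):
        # Filter raw text against the keyword set and emit the
        # matched keywords as integer IDs for CEP queries.
        tokens = text.lower().split()
        return [self.ids[t] for t in tokens if t in self.ids]

# An in-memory stream is scanned once; CEP predicates then match
# on integers rather than re-comparing strings for every query.
f = KeywordFilter(["earthquake", "flood", "fire"])
stream = ["Fire reported downtown", "nothing to see here"]
print([f.to_event(msg) for msg in stream])  # [[2], []]
```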

Analysis of the Time-dependent Relation between TV Ratings and the Content of Microblogs (TV 시청률과 마이크로블로그 내용어와의 시간대별 관계 분석)

  • Choeh, Joon Yeon; Baek, Haedeuk; Choi, Jinho
    • Journal of Intelligence and Information Systems, v.20 no.1, pp.163-176, 2014
  • Social media has become a platform through which users communicate their activities, status, emotions, and experiences to other people. In recent years, microblogs such as Twitter have gained popularity because of their ease of use, speed, and reach. Compared to a conventional web blog, a microblog lowers users' effort and investment in content generation by encouraging shorter posts. There has been much research on capturing social phenomena by analyzing microblog chatter, but measuring television ratings has received little attention so far. Currently, the most common method for measuring TV ratings uses electronic metering devices installed in a small number of sampled households. Meanwhile, microblog users interact with each other while watching television or movies or visiting new places. For measuring TV ratings, some features are significant during certain hours of the day or days of the week, whereas the same features are meaningless during other periods. The importance of a feature can thus change over the course of the day, and a model that captures this time-sensitive relevance is required; modeling the time-related characteristics of features is therefore key when measuring TV ratings through microblogs. We show that capturing the time-dependency of features is vital for improving accuracy. To explore the relationship between the content of microblogs and TV ratings, we collected Twitter data using the Get Search component of the Twitter REST API from January 2013 to October 2013, yielding about 300 thousand posts. After excluding data such as advertising or promoted tweets, we selected 149 thousand tweets for analysis. The number of tweets reaches its maximum on the broadcasting day and increases rapidly around the broadcasting time; this stems from the nature of a public channel, which broadcasts the program at a predetermined time. Our analysis finds that count-based features such as the number of tweets or retweets have a low correlation with TV ratings, implying that a simple tweet rate does not reflect satisfaction with or response to TV programs. Content-based features extracted from the text of tweets have a relatively high correlation with TV ratings, and some emoticons or newly coined words that are not tagged during morpheme extraction have a strong relationship with ratings. We also find a time-dependency in feature correlations between the periods before and after broadcasting. Since a TV program airs regularly at a predetermined time, users post tweets expressing their expectations for the program or their disappointment at not being able to watch it, and the features most highly correlated before the broadcast differ from those after it. This shows that the relevance of words to TV programs can change according to when the tweets are posted. Among the 336 words that fulfill the minimum requirements for candidate features, 145 reach their highest correlation before the broadcasting time, whereas 68 reach it after broadcasting. Interestingly, some words that express the impossibility of watching the program show high relevance despite their negative meaning. Understanding the time-dependency of features can help improve the accuracy of TV ratings measurement. This research contributes a basis for estimating the response to, or satisfaction with, broadcast programs using the time dependency of words in Twitter chatter. More research is needed to refine the methodology for predicting or measuring TV ratings.
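A minimal sketch of the time-split correlation analysis above, assuming per-episode counts of one candidate word divided into before- and after-broadcast windows; all numbers are placeholders, not the paper's data.

```python
import numpy as np

def pearson(x, y):
    # Pearson correlation between a feature series and ratings.
    return np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1]

# Per-episode counts of one candidate word, split by time window.
before_counts = [12, 30, 8, 25, 40]   # tweets before airtime
after_counts  = [55, 10, 60, 15, 5]   # tweets after airtime
ratings       = [7.1, 9.0, 6.5, 8.4, 9.8]

# Keep the word as a feature for whichever window correlates more
# strongly, mirroring the paper's 145-before / 68-after split.
print(pearson(before_counts, ratings), pearson(after_counts, ratings))
```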

Korean Word Sense Disambiguation using Dictionary and Corpus (사전과 말뭉치를 이용한 한국어 단어 중의성 해소)

  • Jeong, Hanjo; Park, Byeonghwa
    • Journal of Intelligence and Information Systems, v.21 no.1, pp.1-13, 2015
  • As opinion mining has been highlighted in big data applications, a great deal of research on unstructured data has been conducted. Social media on the Internet generates unstructured or semi-structured data every second, mostly in the natural languages we use in daily life. Many words in human languages have multiple meanings or senses, which makes it very difficult for computers to extract useful information from these datasets. Traditional web search engines are usually based on keyword search, which can yield incorrect results far from users' intentions. Even though much progress has been made over the years in enhancing search engine performance, there is still much to improve. Word sense disambiguation plays a very important role in natural language processing and is considered one of the most difficult problems in the area. Major approaches to word sense disambiguation can be classified as knowledge-based, supervised corpus-based, and unsupervised corpus-based. This paper presents a method that automatically generates a corpus for word sense disambiguation by taking advantage of the examples in existing dictionaries, avoiding expensive sense-tagging processes. It evaluates the effectiveness of the method with a Naïve Bayes model, a supervised learning algorithm, using the Korean Standard Unabridged Dictionary and the Sejong Corpus. The Korean Standard Unabridged Dictionary contains approximately 57,000 sentences; the Sejong Corpus contains about 790,000 sentences tagged with both part-of-speech and sense information. For the experiments, the dictionary and the corpus were evaluated both combined and as separate entities, using cross-validation. Only nouns, the target of word sense disambiguation in this study, were selected: 93,522 word senses among 265,655 nouns, and 56,914 sentences from related proverbs and examples, were added to the corpus. The Sejong Corpus was easily merged with the dictionary because it is tagged with the sense indices defined by the Korean Standard Unabridged Dictionary. Sense vectors were formed from the merged corpus. Terms used in creating the sense vectors were added to the named-entity dictionary of a Korean morphological analyzer; using this extended dictionary, term vectors were extracted from input sentences. Given an extracted term vector and the sense-vector model built during preprocessing, sense-tagged terms were determined by vector-space-model-based word sense disambiguation. The experiments show better precision and recall with the merged corpus, demonstrating the effectiveness of merging the dictionary examples with the Sejong Corpus. This study suggests the approach can practically enhance the performance of Internet search engines and support more accurate sentence interpretation in natural language processing tasks such as search, opinion mining, and text mining. The Naïve Bayes classifier used here is a supervised learning algorithm based on Bayes' theorem and assumes that all senses are independent. Although this assumption is unrealistic and ignores correlations between attributes, the classifier is widely used because of its simplicity, and in practice it is known to be very effective in many applications such as text classification and medical diagnosis. Further research is needed to consider all possible combinations and/or partial combinations of senses in a sentence. The effectiveness of word sense disambiguation may also be improved if rhetorical structures or morphological dependencies between words are analyzed through syntactic analysis.
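A hedged sketch of the vector-space step described above: sense vectors are built from the context words of each sense's dictionary example sentences, and a new context is assigned the most similar sense by cosine similarity. The tiny English "dictionary" stands in for the Korean resources used in the paper.

```python
from collections import Counter
import math

# Stand-in dictionary: sense -> example sentences for that sense.
examples = {
    "bank/finance": ["deposit money at the bank",
                     "the bank raised interest rates"],
    "bank/river":   ["fishing on the river bank",
                     "the bank of the stream eroded"],
}

def vec(texts):
    # Bag-of-words sense vector over the example sentences.
    return Counter(w for t in texts for w in t.split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

sense_vectors = {s: vec(ts) for s, ts in examples.items()}
context = vec(["they sat on the grassy bank of the river"])
# Pick the sense whose vector is most similar to the context.
print(max(sense_vectors, key=lambda s: cosine(context, sense_vectors[s])))
```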

Stock Price Prediction by Utilizing Category Neutral Terms: Text Mining Approach (카테고리 중립 단어 활용을 통한 주가 예측 방안: 텍스트 마이닝 활용)

  • Lee, Minsik; Lee, Hong Joo
    • Journal of Intelligence and Information Systems, v.23 no.2, pp.123-138, 2017
  • Since the stock market is driven by traders' expectations, studies have attempted to predict stock price movements through the analysis of various text data sources. Research has examined not only the relationship between text data and stock price fluctuations, but also trading strategies based on news articles and social media responses. Studies predicting stock price movements typically apply classification algorithms to a term-document matrix, as in other text mining approaches. Because a document contains many words, it is better to select the words that contribute most when building the term-document matrix: words with too little frequency or importance are removed, and words are selected according to how much they contribute to correctly classifying a document. The conventional approach collects all the documents to be analyzed and selects the words that influence classification. In this study, we instead analyze the documents for each individual stock and select words that are irrelevant to all categories as neutral words; we then extract the words around each selected neutral word and use them to generate the term-document matrix. The idea is that the presence of a neutral word itself is weakly related to stock movement, while the words surrounding it are more likely to affect stock price movements; the generated term-document matrix is then fed to an algorithm that classifies stock price fluctuations. We first removed stop words and selected neutral words for each stock, excluding words that also appear in news articles about other stocks. Through an online news portal, we collected four months of news articles on the top 10 stocks by market capitalization, used three months of news data for training, and applied the remaining month to the model to predict the next day's stock price movements, building models with SVM, Boosting, and Random Forest. The stock market was open for 80 days over the four months (2016/02/01 ~ 2016/05/31); the first 60 days served as the training set and the remaining 20 days as the test set. The neutral-word-based algorithm proposed in this study showed better classification performance than word selection based on sparsity. We compared the existing sparsity-based word extraction method with the suggested method of removing words from the term-document matrix. The suggested method differs in that it uses not only the news articles for the corresponding stock but also news for other stocks to determine which words to remove: it removes both words that appeared across rises and falls alike and words that appeared commonly in the news for other stocks. When prediction accuracy was compared, the suggested method showed higher accuracy. A limitation of this study is that prediction was framed as classifying rises and falls, and the experiment covered only the top ten stocks, which do not represent the entire market. In addition, investment performance is difficult to demonstrate because price fluctuation and rate of return may differ. Future research should therefore use more stocks and predict yields through trading simulation.
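A rough sketch of the neutral-word idea on a toy corpus: words appearing in news for both rises and falls are treated as neutral, and only the words around them enter the term-document matrix fed to the classifier. Corpus, window size, and labels are illustrative, not the study's data.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer

docs = ["earnings beat forecast shares", "regulator probe hits shares",
        "new contract lifts outlook", "lawsuit risk clouds outlook"]
labels = [1, 0, 1, 0]  # 1 = next-day rise, 0 = next-day fall

neutral = {"shares", "outlook"}  # words common to both classes

def window_features(doc, neutral, k=1):
    # Keep only words within k tokens of a neutral word.
    toks = doc.split()
    feats = []
    for i, t in enumerate(toks):
        if t in neutral:
            feats += toks[max(0, i - k):i] + toks[i + 1:i + 1 + k]
    return " ".join(feats)

X = CountVectorizer().fit_transform(window_features(d, neutral) for d in docs)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict(X))  # in-sample sanity check only
```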

Sentiment Analysis of Movie Review Using Integrated CNN-LSTM Model (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon; Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems, v.25 no.4, pp.141-154, 2019
  • Internet technology and social media are growing rapidly, and data mining techniques have evolved to handle unstructured document representations in a variety of applications. Sentiment analysis is an important technology that can distinguish low- from high-quality content through the text data of products, and it has proliferated with text mining. Sentiment analysis mainly analyzes people's opinions in text data by assigning predefined categories such as positive and negative. It has been studied from many angles in terms of accuracy, from simple rule-based methods to dictionary-based approaches using predefined labels, and it is one of the most active research areas in natural language processing and text mining. Online reviews are easy to collect openly and directly affect businesses: in marketing, real-world information from customers is gathered on websites rather than through surveys, and whether posts are positive or negative is reflected in sales. However, many reviews on a website are poorly written and difficult to classify. Earlier studies in this area used review data from the Amazon.com shopping mall, while recent studies use data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, and so on. Accuracy remains limited because sentiment scores change with the subject, the paragraph, the orientation of the sentiment lexicon, and sentence strength. This study classifies polarity into positive and negative categories and aims to increase the prediction accuracy of polarity analysis using the IMDB review data set. First, as comparative models for the text classification task, we adopt popular machine learning algorithms such as NB (naive Bayes), SVM (support vector machines), XGBoost, RF (random forests), and gradient boosting. Second, deep learning can extract complex, discriminative features from data; representative algorithms are CNNs (convolutional neural networks), RNNs (recurrent neural networks), and LSTM (long short-term memory). A CNN can process a sentence in vector form similarly to a bag-of-words model but does not consider the sequential nature of the data. An RNN handles order well because it takes the temporal information of the data into account, but it suffers from the long-term dependency problem; LSTM was introduced to solve it. For comparison, CNN and LSTM were chosen as simple deep learning models, and in addition to the classical machine learning algorithms, CNN, LSTM, and the integrated model were analyzed. Although the algorithms have many parameters, we examined the relationship between parameter values and accuracy to find the optimal combination, and investigated how well the models work for sentiment analysis and why. This study proposes an integrated CNN-LSTM algorithm to extract the positive and negative features in text. The reasons for combining the two algorithms are as follows. A CNN can extract classification features automatically through its convolution layers and is massively parallel, whereas an LSTM is not highly parallelizable. Like faucets, an LSTM has input, output, and forget gates that can be opened and closed at the desired time, placing memory blocks on hidden nodes. A memory block cannot store all the data, but it addresses the long-term dependency that a CNN lacks. Furthermore, when an LSTM follows the CNN's pooling layer, the model has an end-to-end structure in which spatial and temporal features can be learned simultaneously. The combined CNN-LSTM achieved 90.33% accuracy; it is slower than the CNN alone but faster than the LSTM alone, and more accurate than the other models. In addition, the word embedding layer can be improved by training the kernels step by step. CNN-LSTM compensates for the weaknesses of each model and improves learning layer by layer through the end-to-end structure. For these reasons, this study seeks to enhance the classification accuracy of movie reviews using the integrated CNN-LSTM model.
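A hedged Keras sketch of an integrated CNN-LSTM of the kind the paper describes: a convolution-and-pooling stage extracts local n-gram features, and an LSTM then models their order, trained on the IMDB review set. Hyperparameters here are illustrative guesses, not the paper's configuration.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.datasets import imdb
from tensorflow.keras.utils import pad_sequences

vocab, maxlen = 10000, 200
(x_tr, y_tr), (x_te, y_te) = imdb.load_data(num_words=vocab)
x_tr = pad_sequences(x_tr, maxlen=maxlen)
x_te = pad_sequences(x_te, maxlen=maxlen)

model = models.Sequential([
    layers.Embedding(vocab, 64),              # word embedding layer
    layers.Conv1D(64, 5, activation="relu"),  # local n-gram feature extraction
    layers.MaxPooling1D(4),                   # shorten the sequence for the LSTM
    layers.LSTM(64),                          # sequential modeling of CNN features
    layers.Dense(1, activation="sigmoid"),    # positive/negative polarity
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(x_tr, y_tr, epochs=2, batch_size=128,
          validation_data=(x_te, y_te))
```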

Perceptional Change of a New Product, DMB Phone

  • Kim, Ju-Young; Ko, Deok-Im
    • Journal of Global Scholars of Marketing Science, v.18 no.3, pp.59-88, 2008
  • Digital convergence means integration across industries, technologies, and content; in marketing it usually accompanies the creation of new types of products and services built on digital technology, as digitization progresses in electro-communication industries including telecommunications, home appliances, and computers. Digital convergence appears not only in devices such as PCs, AV appliances, and cellular phones, but also in the content, networks, and services required to produce, modify, distribute, and reproduce information. Convergence in content started around 1990; convergence in networks and services began as broadcasting and telecommunications integrated, and DMB (digital multimedia broadcasting), launched in May 2005, is the symbolic icon of this trend. Expectations for DMB are both positive and negative, because DMB arose from technology development rather than from customer need, so customers may have a hard time interpreting what DMB really means. Timing is critical for a high-tech product like DMB, because a product with the same function built on a different technology can replace it within a short period. If DMB does not position itself in customers' minds quickly, products like WiBro, IPTV, or HSDPA could replace it before it even spreads; positioning strategy is therefore critical for DMB's success. Building a correct positioning strategy requires understanding how consumers interpret DMB and how that interpretation can be changed through communication strategy. In this study, we investigate how consumers perceive a new product like DMB and how advertising changes that perception. More specifically, the paper segments consumers into sub-groups based on their DMB perceptions and compares their characteristics, then exposes them to different printed ads whose messages guide them to think of DMB in a specific way, either as a cellular phone or as a personal TV. Research Question 1: Segment consumers according to their perceptions of DMB and compare the characteristics of the segments. Research Question 2: Compare perceptions of DMB after exposing each segment to an ad that induces categorization of DMB in a given direction. If a firm can understand and predict the direction in which consumers will perceive a new product, it can select target customers easily. We segment consumers by perception and analyze their characteristics to find variables, such as prior experience, usage, or habit, that can influence perception; marketers can then use these variables to identify target customers and predict their perceptions. Knowing how customer perception is changed by an ad message also allows a proper communication strategy to be constructed; in particular, information from the segments helps to develop efficient ad strategies for those with prior perceptions. The research framework consists of two measurements and one treatment, O1 X O2. The first observation collects information about consumers' perceptions and characteristics. Based on it, the paper segments consumers into two groups, one that perceives DMB as similar to a cellular phone and one that perceives it as similar to a TV, and compares their characteristics to find why they perceive DMB differently. Next, we expose the subjects to two kinds of ads: one describes DMB as a cellular phone and the other as a personal TV. When the ads are shown, the subjects' prior perceptions of DMB are unknown, i.e., whether a subject belongs to the 'similar-to-cellular-phone' or 'similar-to-TV' segment; however, we analyze the ads' effects separately for each segment. The final observation investigates the ad effect: perception before the ad is compared with perception after the ad, for each segment and each ad. For the segment that perceives DMB as similar to TV, the ad describing DMB as a cellular phone could change the prior perception, while the ad describing DMB as a personal TV could reinforce it. Subjects were selected from undergraduate students, who have basic knowledge of most digital devices and an open attitude toward new products and media; 240 subjects participated in total. To measure perception of DMB, we use indirect measurement through comparison with other similar digital products; a pre-survey of students led to the final selection of PDA, Car-TV, cellular phone, MP3 player, TV, and PSP. A quasi-experiment was conducted in several classes with the instructors' permission. After a brief introduction, prior knowledge, awareness, and usage of DMB and the other digital devices were asked, and their similarities and perceived characteristics were measured. Two kinds of manipulated color-printed ads were then distributed, and the similarities and perceived characteristics of DMB were re-measured. Finally, purchase intention, attitude toward the ad, manipulation checks, and demographic variables were collected, and subjects received a small gift for participating. The stimuli were color-printed A4 advertisements developed through several pre-tests with advertising professionals and students. As a result, consumers were segmented into two subgroups based on their perceptions of DMB, using the similarity between DMB and a cellular phone versus the similarity between DMB and a TV: if a subject's first measure was less than the second, she was classified into segment A, characterized as perceiving DMB like a TV; otherwise into segment B, perceiving DMB like a cellular phone. Discriminant analysis of these groups on usage and attitude shows that segment A knows much about DMB and uses many digital devices, whereas segment B, which thinks of DMB as a cellular phone, knows little about DMB and is less familiar with other digital devices. Thus, consumers with more knowledge perceive DMB as similar to TV, because the launch advertising for DMB led consumers to think of it as TV, while consumers with less interest in digital products, who do not know the DMB ads well, think of DMB as a cellular phone. To investigate perceptions of DMB and the other digital devices, we apply Proxscal, a multidimensional scaling (MDS) technique, in the SPSS statistical package. First, subjects are presented with the 21 pairs of the 7 digital devices and give similarity judgments on a 7-point scale; for each segment, the judgments are averaged into a similarity matrix. Second, Proxscal analyses of segments A and B are run. Third, similarity judgments between DMB and the other devices are obtained after ad exposure. Lastly, the post-exposure similarity judgments of groups A-1, A-2, B-1, and B-2 are labeled 'after DMB', added to the first-stage matrices, and Proxscal is applied again to check the positional shift between DMB and 'after DMB'. The results show that the map of segment A, which perceives DMB as similar to TV, places DMB closer to TV than to the cellular phone, as expected; the map of segment B places DMB closer to the cellular phone than to TV, as expected. Stress values and R-squared are acceptable. After the manipulated advertising stimuli, the DMB perception bends toward the cellular phone when the cellular-phone-like ad is shown, and moves toward Car-TV, a more personalized device, when the TV-like ad is shown; this holds consistently for both segments A and B. Correspondence analysis applied to the same data yields almost the same results. The paper thus answers its two research questions: perception of a new product is formed mainly from prior experience, and advertising is effective in changing and reinforcing perception. We also extend perception change to purchase intention: purchase intention is highest when the ad reinforces the original perception, and the ad that presents DMB as a TV produces the worst intention. This paper has limitations and issues to be pursued in the near future. Methodologically, the current approach cannot provide a statistical test of perceptual change, since classical MDS models such as Proxscal and correspondence analysis are not probability models; a new probabilistic MDS model for testing hypotheses about configurations needs to be developed. The advertising messages need to be developed more rigorously from theoretical and managerial perspectives, and the experimental procedure could be improved for more realistic data collection, for example through web-based experiments, real product stimuli, multimedia presentation, or products displayed together in a simulated shop. In addition, demand effects and social desirability threats to internal validity could influence the results; to handle these threats, the results of the model-intended advertising could be compared with those of "pseudo" advertising. Furthermore, one could vary the level of innovativeness to check whether it produces different results (cf. Moon 2006), and a hypothetical product that is truly innovative and new would help create a blank impression state and allow studying impression formation more rigorously.
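Proxscal is specific to SPSS; as a hedged open-source analog of the mapping step, the sketch below runs metric MDS (scikit-learn) on a precomputed dissimilarity matrix derived from averaged 7-point similarity judgments over the 21 product pairs. The matrix here is random placeholder data, not the study's judgments.

```python
import numpy as np
from sklearn.manifold import MDS

products = ["PDA", "Car-TV", "CellPhone", "MP3", "TV", "PSP", "DMB"]
rng = np.random.default_rng(0)
sim = rng.uniform(1, 7, (7, 7))    # placeholder 7-point similarity judgments
sim = (sim + sim.T) / 2            # average to a symmetric matrix
np.fill_diagonal(sim, 7)           # a product is maximally similar to itself
dissim = 7 - sim                   # convert similarity to dissimilarity

coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissim)
for name, (x, y) in zip(products, coords):
    print(f"{name:10s} {x:6.2f} {y:6.2f}")  # compare DMB's position pre/post ad
```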
