• Title/Summary/Keyword: Information access

Search Results: 10,231

A Case Study on Forecasting Inbound Calls of Motor Insurance Company Using Interactive Data Mining Technique (대화식 데이터 마이닝 기법을 활용한 자동차 보험사의 인입 콜량 예측 사례)

  • Baek, Woong;Kim, Nam-Gyu
    • Journal of Intelligence and Information Systems / v.16 no.3 / pp.99-120 / 2010
  • Due to the widespread customer adoption of non-face-to-face services, there have been many attempts to improve customer satisfaction using the huge amounts of data accumulated through non-face-to-face channels. A call center is regarded as one of the most representative non-face-to-face channels, so it is important that a call center has enough agents to offer a high level of customer satisfaction. However, staffing too many agents increases the operational costs of a call center by increasing labor costs. Predicting the appropriate size of a call center's workforce is therefore one of the most critical success factors of call center management. For this reason, most call centers are currently establishing a WFM (Workforce Management) department to estimate the appropriate number of agents, and they direct much effort toward predicting the volume of inbound calls. In real-world applications, inbound call prediction is usually performed based on the intuition and experience of a domain expert. In other words, a domain expert usually predicts the volume of calls by calculating the average call volume over some period and adjusting that average according to his or her subjective estimation. This kind of approach has fundamental limitations in that the result of the prediction may be strongly affected by the expert's personal experience and competence. It is often the case that one domain expert predicts inbound calls quite differently from another if the two experts hold different opinions on the selection of influential variables and the priorities among those variables. Moreover, it is almost impossible to logically clarify the process of an expert's subjective prediction. Currently, to overcome the limitations of subjective call prediction, most call centers are adopting a WFMS (Workforce Management System) package in which experts' best practices are systemized.
With a WFMS, a user can predict the volume of calls by calculating the average call volume for each day of the week, excluding some eventful days. However, a WFMS requires too much capital during the early stage of system establishment. Moreover, it is hard to incorporate new information into the system when some factors affecting the volume of calls have changed. In this paper, we devise a new model for predicting inbound calls that is not only grounded in theory but also easily applicable to real-world settings. Our model is mainly based on the interactive decision tree technique, one of the most popular techniques in data mining. We therefore expect that our model can predict inbound calls automatically from historical data while also utilizing experts' domain knowledge during tree construction. To analyze the accuracy of our model, we performed intensive experiments on a real case at one of the largest car insurance companies in Korea. In the case study, the prediction accuracy of the two devised models and of a traditional WFMS is analyzed with respect to various allowable error rates. The experiments reveal that our two data mining-based models outperform the WFMS in predicting the volume of accident calls and fault calls in most of the experimental situations examined.
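The WFMS-style weekday-average baseline described above can be sketched in a few lines. The data, the outlier rule, and the threshold below are illustrative assumptions, not the paper's actual procedure:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical historical records: (weekday, inbound calls); weekday 0 = Monday.
history = [
    (0, 120), (1, 95), (2, 100), (0, 130), (1, 105),
    (2, 98), (0, 500),  # 500 = an "eventful day" (e.g. a storm) to be excluded
]

def weekday_baseline(history, outlier_factor=3.0):
    """WFMS-style baseline: average calls per weekday, excluding days
    whose volume exceeds a multiple of that weekday's median."""
    by_day = defaultdict(list)
    for day, calls in history:
        by_day[day].append(calls)
    forecast = {}
    for day, values in by_day.items():
        median = sorted(values)[len(values) // 2]
        kept = [v for v in values if v <= outlier_factor * median]
        forecast[day] = mean(kept)
    return forecast

print(weekday_baseline(history))
```

Here the eventful Monday (500 calls) is dropped before averaging, which mirrors the "excluding some eventful days" step; the paper's decision tree model replaces this crude rule with splits learned from historical data.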

A COVID-19 Diagnosis Model based on Various Transformations of Cough Sounds (기침 소리의 다양한 변환을 통한 코로나19 진단 모델)

  • Minkyung Kim;Gunwoo Kim;Keunho Choi
    • Journal of Intelligence and Information Systems / v.29 no.3 / pp.57-78 / 2023
  • COVID-19, which started in Wuhan, China in November 2019, spread beyond China in early 2020 and worldwide by March 2020. It is important to prevent a highly contagious virus like COVID-19 in advance and to treat it actively once confirmed, but because the virus spreads quickly, it is even more important to identify confirmed cases rapidly and prevent further spread. However, the PCR test used to check for infection is costly and time-consuming, and while self-test kits are easy to access, paying for a kit every time is a burden. Therefore, if anyone could easily determine whether they are positive for COVID-19 based on the sound of a cough, they could check their status anytime, anywhere, with great economic advantages. In this study, an experiment was conducted on a method to identify COVID-19 infection from cough sounds. Cough sound features were extracted through MFCC, Mel-spectrogram, and spectral contrast. To ensure sound quality, noisy data were removed using SNR, and only the cough segments were extracted from each voice file through chunking. Since the objective is binary classification of COVID-19 positive and negative cases, models were trained with the XGBoost, LightGBM, and FCNN algorithms, which are often used for classification, and their results were compared. Additionally, we conducted a comparative experiment on model performance using multidimensional vectors obtained by converting cough sounds into both images and vectors. The experimental results showed that the LightGBM model, using features obtained by converting basic health-status information and cough sounds into multidimensional vectors through MFCC, Mel-spectrogram, spectral contrast, and spectrogram transformations, achieved the highest accuracy of 0.74.
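As a rough illustration of the feature-extraction step, the following numpy sketch computes a log-Mel spectrogram from a raw waveform. The filterbank construction and all parameter values (FFT size, hop, number of Mel bands) are standard textbook choices assumed here, not taken from the paper:

```python
import numpy as np

def mel_filterbank(n_mels, n_fft, sr):
    # Standard triangular Mel filterbank (assumed formula: mel = 2595*log10(1+f/700)).
    def hz_to_mel(f): return 2595 * np.log10(1 + f / 700)
    def mel_to_hz(m): return 700 * (10 ** (m / 2595) - 1)
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        if c > l:
            fb[i - 1, l:c] = (np.arange(l, c) - l) / (c - l)  # rising slope
        if r > c:
            fb[i - 1, c:r] = (r - np.arange(c, r)) / (r - c)  # falling slope
    return fb

def log_mel_spectrogram(signal, sr=16000, n_fft=512, hop=256, n_mels=40):
    # Frame the signal, window, FFT to a power spectrum, then project onto Mel bands.
    frames = [signal[i:i + n_fft] for i in range(0, len(signal) - n_fft + 1, hop)]
    win = np.hanning(n_fft)
    power = np.abs(np.fft.rfft(np.array(frames) * win, axis=1)) ** 2
    fb = mel_filterbank(n_mels, n_fft, sr)
    return np.log(power @ fb.T + 1e-10)  # small floor avoids log(0)

sig = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)  # 1 s synthetic 440 Hz tone
S = log_mel_spectrogram(sig)
print(S.shape)  # (frames, mel bands)
```

A real cough classifier would flatten or image-encode such matrices and feed them to XGBoost, LightGBM, or an FCNN, as the study describes; MFCCs would add a DCT over the Mel bands.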

Mature Market Sub-segmentation and Its Evaluation by the Degree of Homogeneity (동질도 평가를 통한 실버세대 세분군 분류 및 평가)

  • Bae, Jae-ho
    • Journal of Distribution Science / v.8 no.3 / pp.27-35 / 2010
  • As the population, buying power, and intensity of self-expression of the elderly generation increase, its importance as a market segment is also growing. Therefore, the mass marketing strategy for the elderly generation must be changed to a micro-marketing strategy based on the results of sub-segmentation that suitably captures the characteristics of this generation. Furthermore, since the customer access strategy is determined by the sub-segmentation, proper segmentation is one of the key success factors for micro-marketing. Segments and sub-segments differ from sectors, because segmentation for micro-marketing is based on the homogeneity of customer needs. Theoretically, complete segmentation would reveal a single voice. However, complete segmentation is impossible to achieve because of economic factors, factors that affect effectiveness, and so on. To obtain a single voice from a segment, we sometimes need to divide it into many individual cases, which would leave many segments to deal with. On the other hand, to maximize market access performance, fewer segments are preferred. In this paper, we use the term "sub-segmentation" instead of "segmentation," because we divide a specific segment into more detailed segments. To sub-segment the elderly generation, this paper takes their lifestyles and life stages into consideration. In order to reflect these aspects, various surveys and several rounds of expert interviews and focus group interviews (FGIs) were performed. Using the results of these qualitative surveys, we define six sub-segments of the elderly generation. This paper uses five rules to divide the elderly generation: (1) mutually exclusive and collectively exhaustive (MECE) sub-segmentation, (2) important life stages, (3) notable lifestyles, (4) a minimum number of easily classifiable sub-segments, and (5) significant differences in voice among the sub-segments.
The most critical criterion for dividing the elderly market is whether children are married; the other criteria are source of income, gender, and occupation. In this paper, the elderly market is divided into six sub-segments. As mentioned, the number of sub-segments is a key factor for a successful marketing approach. Too many sub-segments would lead to poor substantiality or lack of actionability, while too few would have no effect. Therefore, arriving at the optimum number of sub-segments is a critical problem faced by marketers. This paper presents a method, deduced from the preceding surveys, of evaluating the fitness of sub-segments. The presented method uses the degree of homogeneity (DoH) to measure the adequacy of sub-segments, calculated from quantitative survey questions. The DoH is the ratio of significantly homogeneous questions to the total number of survey questions. A significantly homogeneous question is one in which a single answer is selected significantly more often than the others, as determined by a hypothesis test. Here the null hypothesis (H0) is that there is no significant difference between the selection of one answer and that of the others; the number of significantly homogeneous questions is thus the number of questions for which the null hypothesis is rejected. To calculate the DoH, we conducted a quantitative survey (total sample size 400; 60 questions; 4-5 answer options per question). The sample size of the first sub-segment (no unmarried offspring, earning a living independently) is 113. The sample size of the second sub-segment (no unmarried offspring, economically supported by their offspring) is 57. The sample size of the third sub-segment (unmarried offspring, male, employed) is 70.
The sample size of the fourth sub-segment (unmarried offspring, male, not employed) is 45. The sample size of the fifth sub-segment (unmarried offspring, female, employed, whether the woman herself or her husband) is 63. The sample size of the last sub-segment (unmarried offspring, female, not employed, not even the husband) is 52. Statistically, the sample size of each sub-segment is sufficiently large, so we use the z-test for testing hypotheses. At a significance level of 0.05, the DoHs of the six sub-segments are 1.00, 0.95, 0.95, 0.87, 0.93, and 1.00, respectively. At a significance level of 0.01, they are 0.95, 0.87, 0.85, 0.80, 0.88, and 0.87, respectively. These results show that the first sub-segment is the most homogeneous category, while the fourth has more varied needs. If the sample size were sufficiently large, further segmentation of a given sub-segment would be better; however, as the fourth sub-segment is smaller than the others, more detailed segmentation was not performed. Measuring the fit of a sub-segment is critical for a successful micro-marketing strategy, yet until now there have been no robust rules for doing so. This paper presents a method of evaluating the fit of sub-segments that should be very helpful for judging the adequacy of a sub-segmentation. However, it has some limitations that keep it from being fully robust: (1) the method is restricted to quantitative questions; (2) deciding which types of questions to include in the calculation poses difficulties; and (3) DoH values depend on how the questionnaire content is formed. Despite these limitations, this paper presents a useful method for conducting adequate sub-segmentation, and we believe it can be applied widely in many areas.
Furthermore, the results of the sub-segmentation of the elderly generation can serve as a reference for mature marketing.
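The DoH computation described above can be sketched as follows. The one-sided z-test of the modal answer's share against a uniform null (H0: each of the k options is equally likely) is one plausible formalization of the paper's test, and the response counts below are invented for illustration:

```python
from math import sqrt

def degree_of_homogeneity(questions, z_crit=1.645):
    """DoH = fraction of questions whose modal answer is chosen
    significantly more often than chance. One-sided one-proportion
    z-test; z_crit = 1.645 corresponds to alpha = 0.05."""
    homogeneous = 0
    for counts in questions:          # counts = votes per answer option
        n, k = sum(counts), len(counts)
        p0 = 1 / k                    # H0: uniform choice among k options
        p_hat = max(counts) / n       # share of the modal answer
        z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)
        if z > z_crit:
            homogeneous += 1
    return homogeneous / len(questions)

# Hypothetical survey: 3 questions, 4 options each, 100 respondents.
qs = [[70, 10, 10, 10], [30, 25, 25, 20], [55, 15, 15, 15]]
print(degree_of_homogeneity(qs))
```

The first and third questions reject H0 (a dominant answer exists) while the second does not, giving a DoH of 2/3; the paper applies the same counting idea over 60 questions per sub-segment.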


Visualizing the Results of Opinion Mining from Social Media Contents: Case Study of a Noodle Company (소셜미디어 콘텐츠의 오피니언 마이닝결과 시각화: N라면 사례 분석 연구)

  • Kim, Yoosin;Kwon, Do Young;Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems / v.20 no.4 / pp.89-105 / 2014
  • Since the emergence of the Internet, social media built on highly interactive Web 2.0 applications has provided very user-friendly means for consumers and companies to communicate with each other. Users routinely publish content expressing their opinions and interests on social media such as blogs, forums, chat rooms, and discussion boards, and this content is released on the Internet in real time. For that reason, many researchers and marketers regard social media content as a source of information for business analytics, and many studies have reported results on mining business intelligence from it. In particular, opinion mining and sentiment analysis, techniques to extract, classify, understand, and assess the opinions implicit in text, are frequently applied to social media content analysis because they emphasize determining sentiment polarity and extracting authors' opinions. A number of frameworks, methods, techniques, and tools have been presented by these researchers. However, we have found weaknesses in their methods, which are often technically complicated and not sufficiently user-friendly for supporting business decisions and planning. In this study, we attempted to formulate a more comprehensive and practical approach to opinion mining with visual deliverables. First, we describe the entire cycle of practical opinion mining on social media content, from the initial data-gathering stage to the final presentation session. Our proposed approach consists of four phases: collecting, qualifying, analyzing, and visualizing. In the first phase, analysts choose the target social media; each target medium requires a different means of access, such as an open API, search tools, a DB-to-DB interface, or purchased content. The second phase is pre-processing to generate useful material for meaningful analysis.
If garbage data are not removed, the results of social media analysis will not provide meaningful and useful business insights; to clean social media data, natural language processing techniques should be applied. The next step is the opinion mining phase, where the cleansed social media content set is analyzed. The qualified data set includes not only user-generated content but also content identification information such as creation date, author name, user ID, content ID, hit counts, review or reply status, favorites, etc. Depending on the purpose of the analysis, researchers or data analysts can select a suitable mining tool. Topic extraction and buzz analysis are usually related to market trend analysis, while sentiment analysis is used for reputation analysis; there are also various applications, such as stock prediction, product recommendation, and sales forecasting. The last phase is visualization and presentation of the analysis results. The major focus of this phase is to explain the results and help users comprehend their meaning; therefore, to the extent possible, deliverables from this phase should be simple, clear, and easy to understand rather than complex and flashy. To illustrate our approach, we conducted a case study on a leading Korean instant noodle company, NS Food, which holds a 66.5% market share and has kept the No. 1 position in the Korean ramen business for several decades. We collected a total of 11,869 pieces of content, including blogs, forum posts, and news articles. After collecting the social media content data, we generated instant-noodle-business-specific language resources for data manipulation and analysis using natural language processing. In addition, we classified content into more detailed categories such as marketing features, environment, and reputation.
In these phases, we used free software such as the TM, KoNLP, ggplot2, and plyr packages from the R project. As a result, we present several useful visualization outputs, such as domain-specific lexicons, volume and sentiment graphs, topic word clouds, heat maps, valence tree maps, and other visualized images, providing vivid, full-color examples built with open-source R packages. Business actors can detect at a glance which areas are weak, strong, positive, negative, quiet, or loud. A heat map can show the movement of sentiment or volume in a category-by-time matrix through the density of color over time periods. The valence tree map, one of the most comprehensive and holistic visualization models, should be very helpful for analysts and decision makers who need to quickly grasp the "big picture" of a business situation, since a tree map can present buzz volume and sentiment in a hierarchical structure for a given period. This case study offers real-world business insights from market sensing and demonstrates to practically minded business users how such results can support timely decision making in response to ongoing market changes. We believe our approach provides a practical and reliable guide to opinion mining with visualized results that are immediately useful, not just in the food industry but in other industries as well.
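Although the authors worked in R (TM, KoNLP, ggplot2, plyr), the core lexicon-based sentiment-scoring step of such a pipeline can be illustrated in a few lines of Python. The lexicon entries and the normalization scheme are illustrative assumptions, not the paper's noodle-domain resources:

```python
# Hypothetical domain lexicon; the paper built noodle-business-specific resources.
POSITIVE = {"delicious", "tasty", "love", "best"}
NEGATIVE = {"salty", "bland", "worst", "expensive"}

def sentiment_polarity(tokens):
    """Lexicon count: +1 per positive hit, -1 per negative hit,
    normalized by total hits to a polarity in [-1, 1]."""
    score = sum((t in POSITIVE) - (t in NEGATIVE) for t in tokens)
    hits = sum((t in POSITIVE) + (t in NEGATIVE) for t in tokens)
    return score / hits if hits else 0.0

text = "the ramen is delicious and tasty but salty"
print(sentiment_polarity(text.split()))
```

Aggregating such polarity scores per category and time bucket yields exactly the kind of volume/sentiment matrix that the paper's heat maps and valence tree maps visualize.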

An Analysis of Big Video Data with Cloud Computing in Ubiquitous City (클라우드 컴퓨팅을 이용한 유시티 비디오 빅데이터 분석)

  • Lee, Hak Geon;Yun, Chang Ho;Park, Jong Won;Lee, Yong Woo
    • Journal of Internet Computing and Services / v.15 no.3 / pp.45-52 / 2014
  • The Ubiquitous City (U-City) is a smart, intelligent city designed to satisfy people's desire to enjoy IT services with any device, anytime, anywhere. It is a future city model based on the Internet of Everything or Things (IoE/IoT), and it includes many video cameras networked together. These networked video cameras support many U-City services, serving alongside sensors as one of the main sources of input data, and they constantly generate a huge amount of video information, truly big data for the U-City. The U-City is usually required to manipulate this big data in real time, which is not easy at all. It is also often necessary to analyze the accumulated video data to detect an event or find a person among them, which requires a lot of computational power and usually takes a long time. There is ongoing research that tries to reduce the processing time for big video data, and cloud computing can be a good solution to this problem. Among the many cloud computing methodologies that could be applied, MapReduce is an interesting and attractive one: it has many advantages and is gaining popularity in many areas. Video cameras evolve day by day, and their resolution improves sharply, leading to exponential growth in the data produced by networked cameras; we are dealing with truly big data when we must handle the video produced by high-quality cameras. Video surveillance systems were of limited use until cloud computing emerged, but they are now spreading widely in U-Cities thanks to these methodologies. Video data are unstructured, so it is not easy to find good research results on analyzing them with MapReduce. This paper presents an analysis system for video surveillance, a cloud-computing-based video data management system that is easy to deploy, flexible, and reliable.
It consists of the video manager, the video monitors, the storage for the video images, the storage client, and the streaming-IN component. The "video monitor" for the video images consists of the "video translator" and the "protocol manager". The "storage" contains the MapReduce analyzer. All components were designed according to the functional requirements of a video surveillance system. The "streaming IN" component receives the video data from the networked video cameras and delivers them to the "storage client"; it also manages network bottlenecks to smooth the data stream. The "storage client" receives the video data from the "streaming IN" component and stores them in the storage; it also helps other components access the storage. The "video monitor" component transfers the video data via smooth streaming and manages the protocol. The "video translator" sub-component enables users to manage the resolution, codec, and frame rate of the video image. The "protocol" sub-component manages the Real Time Streaming Protocol (RTSP) and the Real Time Messaging Protocol (RTMP). We use the Hadoop Distributed File System (HDFS) as the cloud storage: Hadoop stores the data in HDFS and provides a platform that can process the data with the simple MapReduce programming model. We suggest our own methodology for analyzing video images with MapReduce: the workflow of the video analysis is presented and explained in detail in this paper. A performance evaluation was conducted experimentally, and we found that our proposed system worked well; the results are presented in this paper with analysis. On our cluster system, we used compressed $1920{\times}1080$ (FHD) resolution video data, the H.264 codec, and HDFS as video storage, and we measured the processing time according to the number of frames per mapper.
Tracing the optimal split size of the input data and the processing time as the number of nodes varies, we found that the system performance scales linearly.
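The MapReduce style of frame analysis described above can be sketched without a Hadoop cluster. The record layout and the event-counting job below are illustrative assumptions, not the authors' actual workflow:

```python
from collections import defaultdict

# Hypothetical per-frame records: (camera_id, frame_index, event_detected).
frames = [
    ("cam1", 0, False), ("cam1", 1, True), ("cam2", 0, True),
    ("cam1", 2, True), ("cam2", 1, False),
]

def map_phase(record):
    """Mapper: emit (camera, 1) for each frame in which an event was detected."""
    cam, _, event = record
    return [(cam, 1)] if event else []

def reduce_phase(pairs):
    """Reducer: sum the emitted counts per camera key."""
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

# The "shuffle" step groups mapper output by key before reduction.
shuffled = [kv for rec in frames for kv in map_phase(rec)]
print(reduce_phase(shuffled))
```

On Hadoop the same two functions would run distributed over HDFS splits, and the "frames per mapper" setting the paper measures corresponds to how many such records each mapper task receives.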

A Study on the Practical Approach of European Union's Market Access through the Understanding of Tariffs and Non-Tariff Barriers in European Union (EU의 관세 및 비관세 장벽 이해를 통한 EU시장 개척 방안)

  • Jung, Jae-Woo;Lee, Kil-Nam
    • International Commerce and Information Review / v.16 no.4 / pp.191-225 / 2014
  • This paper analyzes the current situation of the EU (European Union) and ascertains the EU's economic conditions in terms of tariff lines and non-tariff barriers; its purpose is to identify the problems posed by the EU's tariff lines and non-tariff barriers. We then suggest directions for further promoting exports from Korea to the EU for our companies. First, this paper describes the characteristics and outline of the EU. The EU is a politico-economic union of 28 member states that are primarily located in Europe. The EU traces its origins to the European Coal and Steel Community (ECSC) and the European Economic Community (EEC), formed by the Inner Six countries in 1951 and 1958, respectively. The Maastricht Treaty then established the European Union under its current name in 1993, and the latest major amendment to the constitutional basis of the EU, the Treaty of Lisbon, came into force in 2009. The EU has a combined population of over 500 million inhabitants and generated a nominal gross domestic product (GDP) of 16.692 trillion US dollars. The results are as follows. First, in terms of tariff lines and customs duties, our companies have to know precisely the EU's actual tariff lines and other duties and taxes, such as the value added tax, excise tax, and corporate tax regulated by the EU Commission and the EU's 28 member states. Second, our companies have to check the EU's non-tariff barriers, such as RoHS, WEEE, and REACH, which can become hindrances or obstacles to trade for foreign companies. All companies exporting to the EU are subject to these technical barriers to trade irrespective of their nationality, so our companies must fulfill the EU Commission's requirements concerning safety, health, the environment, and so on. They should also adopt market-driven strategies in marketing and logistics to expand their exports beyond current levels.


A Research on the Regulations and Perception of Interactive Game in Data Broadcasting: Special Emphasis on the TV-Betting Game (데이터방송 인터랙티브 게임 규제 및 이용자 인식에 관한 연구: 승부게임을 중심으로)

  • Byun, Dong-Hyun;Jung, Moon-Ryul;Bae, Hong-Seob
    • Korean journal of communication and information / v.35 / pp.250-291 / 2006
  • This study examines the regulatory issues and introduction problems of TV-betting data broadcasts in Korea through in-depth interviews with a panel group. TV-betting data broadcast services for card games and horse racing games are widely used in Europe and other parts of the world. To carry out the study, a demo program of a TV-betting data broadcast in the OCAP (OpenCable™ Application Platform Specification) system environment, the data broadcasting standard for digital cable broadcasts in Korea, was shown to the panel group, who were then interviewed after watching and using the program. The results can be summarized as follows. First of all, while TV-betting data broadcasts have many entertainment elements, the respondents thought that it would be difficult to introduce TV-betting in data broadcasts in Korea as has been done overseas, largely due to social factors. In addition, for TV-betting data broadcasts to be introduced, they suggested that excessive speculativeness must be suppressed through a series of regulatory devices, such as guaranteeing the credibility of the medium with secure transaction systems, scheduling programs with effective time constraints to prevent the games from running too frequently, limiting betting values, and prohibiting access to games through the set-top boxes of other data broadcast subscribers. The general consensus was that TV-betting could be considered for gradual introduction within governmental laws and regulations that would minimize its ill effects. Therefore, the government should formulate long-term regulations and policies for data broadcasts. Once the groundwork is laid for the safe introduction of TV-betting on data broadcasts within the boundary of laws and regulations, interactive TV games are expected to be introduced in Korea, not only adding entertainment functionality but also fostering far-reaching development of the data broadcast and new media industries.


The Effect of Supply Chain Dynamic Capabilities, Open Innovation and Supply Uncertainty on Supply Chain Performance (공급사슬 동적역량, 개방형 혁신, 공급 불확실성이 공급사슬 성과에 미치는 영향)

  • Lee, Sang-Yeol
    • Journal of the Korea Academia-Industrial cooperation Society / v.19 no.4 / pp.481-491 / 2018
  • As the global business environment is dynamic, uncertain, and complex, supply chain management determines supply chain performance through the utilization of the resources and capabilities of the companies involved in the supply chain. Companies pursuing open innovation gain greater access to the external environment and accumulate knowledge flows and learning experiences, and they may generate better business performance from dynamic capabilities. This study analyzed the effects of supply chain dynamic capabilities, open innovation, and supply uncertainty on supply chain performance. Based on questionnaires from 178 companies listed on KOSDAQ, the empirical results are as follows. First, among supply chain dynamic capabilities, integration and reactivity capabilities have a positive effect on supply chain performance. Second, the moderating effect of open innovation showed a negative correlation for information exchange and positive correlations for integration, cooperation, and reactivity. Third, two of the three-way interaction terms, "information exchange * open innovation * supply uncertainty" and "integration * open innovation * supply uncertainty", were statistically significant. The implications of this study are as follows. First, as the supply chain needs to optimize the whole process across its components rather than individual companies, dynamic capabilities play an important role in improving performance. Second, for KOSDAQ companies with limited capital resources, open innovation that integrates external knowledge is valuable, and to increase synergistic effects it is necessary to develop dynamic capabilities accordingly. Third, since resources are constrained, managers must determine the type or level of capabilities and open innovation in accordance with supply uncertainty. Since this study is limited to the analysis of survey data, collecting secondary or longitudinal data would be a useful next step.
It is also necessary to further analyze the internal and external factors that have a significant impact on supply chain performance.
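The three-way interaction terms mentioned above are formed by multiplying the (typically standardized) predictors and entering the products alongside the main effects. A minimal numpy sketch of such a moderated regression follows; the variable names, effect sizes, and data are invented for illustration, not the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 178  # sample size matching the study's 178 KOSDAQ firms

# Hypothetical standardized predictors.
integration = rng.standard_normal(n)
open_innov = rng.standard_normal(n)
uncertainty = rng.standard_normal(n)

# Synthetic outcome with a main effect and a 3-way interaction effect.
y = (0.4 * integration
     + 0.2 * integration * open_innov * uncertainty
     + 0.1 * rng.standard_normal(n))

# Design matrix: intercept, main effects, all 2-way products, and the 3-way term.
X = np.column_stack([
    np.ones(n), integration, open_innov, uncertainty,
    integration * open_innov, integration * uncertainty,
    open_innov * uncertainty,
    integration * open_innov * uncertainty,
])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta.round(2))  # beta[7] estimates the 3-way moderation effect
```

A significant coefficient on the final column is what "integration * open innovation * supply uncertainty was statistically significant" means operationally: the integration-performance slope depends jointly on the other two variables.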

An Exploratory Study on the Media Experience of Village Community Media Producers Focusing on the Production, Tasks and Policy Implications of Community Media in Jeju (마을공동체미디어 생산자의 미디어 경험에 관한 탐색적 연구 제주지역 공동체미디어의 생산과 과제, 정책적 함의를 중심으로)

  • Jung, Yong Bok
    • Korean journal of communication and information / v.81 / pp.153-186 / 2017
  • The purpose of this study was to identify the characteristics of village community media in Jeju by examining the values its participants have experienced in the production process. The study focused on the creation and production process of village community media, the specific values reflected in this process, and how to activate and operate it sustainably, through in-depth interviews with 12 media participants in the Jeju community. The analysis showed, first, that migrants, rather than indigenous residents, became central to the creation of village community media in Jeju, and that they felt a very personal sense of fun, enthusiasm, and satisfaction. Access to and participation in village community media were completely open, and its content was filled with stories of the everyday lives of village residents and the unrecorded, hidden stories of the village's elderly. The production process was characterized by horizontal communication, reflecting the opinions of individual media participants well even in joint meetings. Second, examining the values the participants applied to the production process, they regarded as important the connections formed by voluntary participation in communication and the restoration of communities through the activation of communication. Finally, an examination of the challenges and development plans for the sustainable operation of community media in Jeju found that the following are required: the active participation of village residents, guaranteed space for village community media, provision of the broadcasting equipment that is currently lacking, and budget support from local governments. It was confirmed once again that a local-government support system for stable activities is an urgent prerequisite for sustainable village community media.


Measuring the Third-Person Effects of Public Opinion Polls: Focusing On Online Polls (여론조사보도에 대한 제3자효과 검증: 온라인 여론조사를 주목하며)

  • Kim, Sung-Tae;Willnat, Las;Weaver, David
    • Korean journal of communication and information / v.32 / pp.49-73 / 2006
  • During the past decades, public opinion polls have become a ubiquitous tool for probing the complexity of people's beliefs and attitudes on a wide variety of issues. Especially since the late 1970s, the use of polls by news organizations has increased dramatically. Along with the proliferation of traditional polls, in the past few years pollsters and news organizations have come to recognize the advantages of online polls, and there has been increasing effort to take the pulse of the public through the Internet. With the Internet's rapid growth in recent years, advocates of online polling often emphasize its relative advantages over traditional polls. Researchers from Harris Black International Ltd., for example, argue that "Internet polling is less expensive and faster and offers higher response rates than telephone surveys." Moreover, since many of the newer online polls draw respondents from large databases of registered Internet users, the results of online polls have become more balanced. A series of Harris Black online polls conducted during the 1998 gubernatorial and senatorial elections, for example, accurately projected the winners in 21 of the 22 races it tracked. Many researchers, however, severely criticize online polls for not being representative of the larger population. Despite the often enormous number of participants, Internet users who participate in online polls tend to be younger, better educated, and more affluent than the general population. As Traugott pointed out, the people polled in Internet surveys are a "self-selected" group, and thus "have volunteered to be part of the test sample, which could mean they are more comfortable with technology, more informed about news and events ... than Americans who aren't online."
The fact that users of online polls are self-selected and demographically very different from Americans who have no access to the Internet is likely to influence estimates of what the majority of people think about social or political issues. One goal of this study is therefore to analyze whether people perceive traditional and online public opinion polls differently. While most people might not differentiate sufficiently between traditional random-sample polls and non-representative online polls, some audiences might perceive online polls as more useful and representative. Since most online polls allow some form of direct participation, usually an instant vote by mouse click, and often present their findings based on huge numbers of respondents, consumers of these polls might perceive them as more accurate, representative, or reliable than traditional random-sample polls. If that is true, perceptions of public opinion in society could be significantly distorted for those who rely on or participate in online polls. In addition to investigating how people perceive random-sample and online polls, this study focuses on the perceived impact of public opinion polls. Like past studies that examined how public opinion polls can influence the perception of mass opinion, this study analyzes how people perceive the effects of polls on themselves and on other people. This interest springs from prior studies of the "third-person effect," which have found that people often perceive persuasive communications as exerting a stronger influence on others than on themselves. While most studies concerned with the political effects of public opinion polls show that exit polls and early reporting of election returns have only weak or no effects on the outcome of election campaigns, some empirical findings suggest that exposure to polls can move people's opinions both toward and away from the perceived majority opinion.
Thus, if people indeed believe that polls influence others more than themselves, perceptions of majority opinion could be significantly altered because people might anticipate that others will react more strongly to poll results.
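The representativeness argument above can be made concrete with the standard margin-of-error formula for a simple random sample: sampling error shrinks as n grows, but a self-selected sample's bias does not. The figures below are illustrative, using the usual normal-approximation interval:

```python
from math import sqrt

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion from a simple random
    sample of size n (normal approximation; p = 0.5 is worst case)."""
    return z * sqrt(p * (1 - p) / n)

# Sampling error shrinks with n ...
print(round(margin_of_error(1000), 3))    # typical phone poll -> 0.031
print(round(margin_of_error(100000), 4))  # huge online poll   -> 0.0031
# ... but this formula assumes random sampling. If online participants
# favor an option at 60% while the population sits at 50%, the 10-point
# selection bias remains no matter how large n becomes.
```

This is why a 100,000-respondent click poll can look far more precise than a 1,000-person random-sample poll while being much less accurate, which is exactly the perception gap the study investigates.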
