• Title/Summary/Keyword: 웹정보시스템 (Web Information Systems)

Search Results: 5,824

Real-time and Parallel Semantic Translation Technique for Large-Scale Streaming Sensor Data in an IoT Environment (사물인터넷 환경에서 대용량 스트리밍 센서데이터의 실시간·병렬 시맨틱 변환 기법)

  • Kwon, SoonHyun;Park, Dongwan;Bang, Hyochan;Park, Youngtack
    • Journal of KIISE / v.42 no.1 / pp.54-67 / 2015
  • Nowadays, studies on the fusion of Semantic Web technologies are being carried out to promote the interoperability and value of sensor data in an IoT environment. To accomplish this, the semantic translation of sensor data is essential for convergence with service domain knowledge. The existing semantic translation technique, however, translates static metadata into semantic data (RDF) and cannot properly handle the real-time, large-scale characteristics of an IoT environment. Therefore, in this paper, we propose a technique for translating the large-scale streaming sensor data generated in an IoT environment into semantic data using real-time, parallel processing. In this technique, we define rules for semantic translation and store them in a semantic repository. The sensor data is translated in real time, with parallel processing, using these pre-defined rules and an ontology-based semantic model. To improve performance, we use Apache Storm, a real-time big data analysis framework, for the parallel processing. The proposed technique was subjected to performance testing with large-scale streaming Automatic Weather Station (AWS) observation data from the Korea Meteorological Administration.
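
The following is a minimal sketch, not the paper's actual Storm topology, of the rule-based translation step it describes: one streaming sensor reading is mapped to RDF triples using a pre-defined rule and an ontology-based model. The namespaces, field names, and rule format are hypothetical; in the paper this logic would run inside parallel Storm bolts.

```python
# Hypothetical sketch of rule-based semantic translation of one sensor reading.
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

SSN = Namespace("http://example.org/sensor-model#")   # hypothetical ontology namespace
DATA = Namespace("http://example.org/observations/")  # hypothetical instance namespace

def translate(observation: dict) -> Graph:
    """Translate one raw sensor reading into RDF triples using a fixed rule."""
    g = Graph()
    obs = DATA[f"obs-{observation['id']}"]
    g.add((obs, RDF.type, SSN.Observation))
    g.add((obs, SSN.observedBy, SSN[observation["sensor"]]))
    g.add((obs, SSN.hasValue, Literal(observation["value"], datatype=XSD.double)))
    g.add((obs, SSN.observedAt, Literal(observation["time"], datatype=XSD.dateTime)))
    return g

# A single hypothetical AWS-style reading; in the paper this would be one
# tuple flowing through a Storm bolt running many such translations in parallel.
reading = {"id": 1, "sensor": "aws-107", "value": 21.4, "time": "2015-01-01T00:00:00"}
print(translate(reading).serialize(format="turtle"))
```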

Establishment and service of user analysis environment related to computational science and engineering simulation platform

  • Kwon, Yejin;Jeon, Inho;On, Noori;Seo, Jerry H.;Lee, Jongsuk R.
    • Journal of Internet Computing and Services / v.21 no.6 / pp.123-132 / 2020
  • The EDucation-research Integration through Simulation On the Net (EDISON) platform, a web-based platform that provides computational science and engineering simulation execution environments, offers various analysis environments to students and general users as well as computational science and engineering researchers. To expand the user base of its simulation environment services, the EDISON platform holds a challenge every year and seeks to increase the competitiveness and excellence of the platform by analyzing user requirements for the various simulation environments offered. The challenge platform system in the field of computational science and engineering is provided to users in connection with the simulation services of the existing EDISON platform. Previously, EDISON challenge services operated independently of the simulation services, so features such as end-user review and intermediate simulation results could not be linked. To meet these user requirements, the challenge platform currently in service is linked to the existing computational science and engineering services. In addition, it was possible to increase the efficiency of service resources by providing limited services informed by various analyses of all users participating in the challenge. In this study, by analyzing users' simulations and usage environments, we provide an improved challenge platform and analyze ways to improve the simulation execution environment.

Analysis of articles on water quality accidents in the water distribution networks using big data topic modelling and sentiment analysis (빅데이터 토픽모델링과 감성분석을 활용한 물공급과정에서의 수질사고 기사 분석)

  • Hong, Sung-Jin;Yoo, Do-Guen
    • Journal of Korea Water Resources Association / v.55 no.spc1 / pp.1235-1249 / 2022
  • This study applied web crawling to extract big data news articles on water quality accidents in the water supply system and presents a step-by-step algorithm for obtaining accurate water quality accident news. In a large-scale water quality accident, development patterns such as accident recognition, accident spread, accident response, and accident resolution appear as the accident unfolds. Accordingly, the development of water quality accidents was analyzed in detail through key keywords and sentiment analysis for each stage, based on case studies, and the meanings were derived. The proposed methodology was applied to the period of the 2020 tap-water larvae incident in Incheon Metropolitan City. The results confirmed that, in a situation where the disclosure of information that directly affects consumers, such as water quality accidents, is restricted, the tone of news articles and media reports and the degree of consumer sentiment clearly change over time for an accident with long-term damage. This suggests the need to prepare consumer-centered policies that increase consumer positivity, even though, from the supplier's point of view, rapid restoration of facilities is very important in the development of water quality accidents.
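
A hypothetical sketch of the two building blocks the study describes: crawling article text from a news page and scoring its tone against a small polarity dictionary. The URL, word lists, and scoring rule are illustrative assumptions, not the study's actual crawler or lexicon.

```python
# Hypothetical crawl-and-score sketch for news articles on water quality accidents.
import requests
from bs4 import BeautifulSoup

POSITIVE = {"restored", "resolved", "safe", "improved"}      # toy polarity lexicon
NEGATIVE = {"contamination", "larvae", "complaint", "accident"}

def crawl_article(url: str) -> str:
    """Fetch a page and return its visible paragraph text."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return " ".join(p.get_text(" ", strip=True) for p in soup.find_all("p"))

def sentiment_score(text: str) -> float:
    """(positive - negative) / total matched words, in [-1, 1]."""
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

# Usage with a hypothetical article URL:
# print(sentiment_score(crawl_article("https://example.com/news/123")))
```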

Economic Feasibility Analysis of Nationwide Expansion of Agro-meteorological Early Warning Service for Weather Risk Management in Korea (농업기상재해 조기경보서비스의 전국 확대에 따른 경제적 타당성 분석)

  • Sangtaek Seo;Yun Hee Jeong;Soo Jin Kim;Kyo-Moon Shim
    • Korean Journal of Agricultural and Forest Meteorology / v.25 no.3 / pp.236-244 / 2023
  • The purpose of this study was to examine the economic feasibility of the nationwide expansion of agro-meteorological early warning services. The net present value method, a cost-benefit analysis technique, was applied. The benefit items constituting the net present value were the damage reduction estimated from crop insurance data and the willingness to pay for the use of early warning services; the cost items included system construction costs, maintenance costs, and text message transmission costs. The analysis found that the nationwide expansion of early warning services was economically feasible and that the economic effect varied with the level of text message use (10% to 40%, in 10%p intervals) among participating farmers. In the future, the economic effect of early warning services is expected to increase further as the number of participating farmers grows and as crop damage caused by climate change increases. The economic effect can be enhanced further by actively utilizing information delivery channels such as apps and the web in addition to text messages.
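
As a worked illustration of the net present value method the study applies, the sketch below discounts assumed annual benefits (damage reduction plus willingness to pay) against assumed costs (construction, maintenance, text transmission). All figures and the discount rate are hypothetical, not the study's data.

```python
# Illustrative net present value calculation; all numbers are hypothetical.
def npv(benefits, costs, rate):
    """NPV = sum over years t of (B_t - C_t) / (1 + rate)^t."""
    return sum((b - c) / (1 + rate) ** t
               for t, (b, c) in enumerate(zip(benefits, costs)))

# Hypothetical 5-year cash flows in million KRW; year 0 bears the system
# construction cost, later years bear maintenance and message transmission.
benefits = [0, 400, 450, 500, 550]
costs = [1000, 120, 120, 120, 120]
result = npv(benefits, costs, rate=0.045)
print(f"NPV: {result:.1f} million KRW; the expansion is feasible if NPV > 0")
```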

Ontology Design for the Register of Officials (先生案) of the Joseon Period (조선시대 선생안 온톨로지 설계)

  • Kim, Sa-hyun
    • (The)Study of the Eastern Classic / no.69 / pp.115-146 / 2017
  • This paper presents an ontology design for a digital archive of the seonsaengan (先生案) of the Joseon period. A seonsaengan is a register of the staff officials of a government office, recording their personal information and their transfers from one office to another, along with their dates of birth, family clans, etc. A total of 176 such registers are known to be kept at libraries and museums in Korea. This paper undertakes an ontology design for 47 of these registers preserved at the Jangseogak Archives of the Academy of Korean Studies (AKS), focusing on their content and structure, including the names of the relevant government offices and the posts held by the officials. The ontology design centered on the officials, the offices they belonged to, and the transfer records kept in the registers. Relevant resources were categorized into classes according to the attributes common to the individuals, and for each individual a semantic predicate was defined that explicitly expresses its relationships with other individuals. The classes were divided into eight categories, including registers, figures, offices, official posts, state examinations, records, and concepts. For the design of relationships and attributes, existing data models such as Dublin Core, the Europeana Data Model, CIDOC-CRM, and the data model for the database of successful candidates in the state examinations were consulted; where terms from existing data models were used, the namespace of the relevant model was retained, and the author defined new relationships where necessary. The designed ontology is demonstrated through an exemplary implementation of the Myeongneung seonsaengan (明陵先生案). The work also considers the expected effects of the entered information when a single register is expanded to multiple registers, along with ways to use it. The ontology design is not based on a review of all 176 registers, and the model needs to be improved as relevant information is obtained. The aim of these efforts is the systematic arrangement of the information contained in the registers; information arranged in this manner may later be rearranged with the aid of databases or archives that exist currently or will be built in the future. The information entered through this ontology design is expected to serve as data showing how government offices were operated and what their personnel system was like, along with the politics, economy, society, and culture of the Joseon period, in linkage with databases already established.
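
A minimal sketch, under assumed names, of how classes and relationships like those described above might be expressed in RDF. The namespace, class names, and predicates are hypothetical stand-ins, not the paper's actual vocabulary.

```python
# Hypothetical RDF sketch of the register/figure/office/post class design.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

SSA = Namespace("http://example.org/seonsaengan#")  # hypothetical namespace
g = Graph()

# Classes mirroring the paper's categories (subset shown).
for cls in ("Register", "Figure", "Office", "OfficialPost", "Record"):
    g.add((SSA[cls], RDF.type, RDFS.Class))

# A hypothetical official recorded in the Myeongneung register, with
# predicates expressing the register membership, post, and transfer.
g.add((SSA["myeongneung-register"], RDF.type, SSA.Register))
g.add((SSA["official-001"], RDF.type, SSA.Figure))
g.add((SSA["official-001"], SSA.recordedIn, SSA["myeongneung-register"]))
g.add((SSA["official-001"], SSA.heldPost, SSA["chambong"]))
g.add((SSA["official-001"], SSA.transferredTo, SSA["another-office"]))
g.add((SSA["official-001"], RDFS.label, Literal("hypothetical official")))

print(g.serialize(format="turtle"))
```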

Implementation Strategy of Global Framework for Climate Service through Global Initiatives in AgroMeteorology for Agriculture and Food Security Sector (선도적 농림기상 국제협력을 통한 농업과 식량안보분야 전지구기후 서비스체계 구축 전략)

  • Lee, Byong-Lyol;Rossi, Federica;Motha, Raymond;Stefanski, Robert
    • Korean Journal of Agricultural and Forest Meteorology / v.15 no.2 / pp.109-117 / 2013
  • The Global Framework for Climate Services (GFCS) will guide the development of climate services that link science-based climate information and predictions with climate-risk management and adaptation to climate change. The GFCS is made up of five pillars: Observations/Monitoring (OBS), Research/Modeling/Prediction (RES), the Climate Services Information System (CSIS), and the User Interface Platform (UIP), all supplemented by Capacity Development (CD). Corresponding to each GFCS pillar, the Commission for Agricultural Meteorology (CAgM) has been proposing the "Global Initiatives in AgroMeteorology" (GIAM) to facilitate the GFCS implementation scheme from the perspective of agrometeorology: the Global AgroMeteorological Outlook System (GAMOS) for OBS, the Global AgroMeteorological Pilot Projects (GAMPP) for RES, the Global Federation of AgroMeteorological Societies (GFAMS) for UIP/RES, the next phase of WAMIS for CSIS/UIP, and the Global Centers of Research and Excellence in AgroMeteorology (GCREAM) for CD, through which next-generation experts will be fostered in a virtuous cycle of human resource development. The World AgroMeteorological Information Service (WAMIS) is a dedicated web server on which agrometeorological bulletins and advisories from members are placed. CAgM is about to extend this service into a Grid portal that shares computing resources, information, and human resources with user communities as part of the GFCS. To facilitate the sharing of ICT resources, a dedicated Data Collection or Production Centre (DCPC) of the WMO Information System for WAMIS is being implemented by the Korea Meteorological Administration. CAgM will also provide land surface information to support the Land Data Assimilation System (LDAS) of the next-generation Earth System. The International Society for Agricultural Meteorology (INSAM) is an Internet marketplace for agrometeorologists; to strengthen INSAM as the UIP for the research community in agrometeorology, CAgM proposed establishing GFAMS. CAgM will encourage next-generation agrometeorological experts through GCREAM, including graduate programmes, under the framework of GENRI as a governing hub of the Global Initiatives in AgroMeteorology. GENRI would coordinate all global initiatives, such as GFAMS, GAMPP, and GAPON, including WAMIS II, primarily targeting GFCS implementation.

Development of Sentiment Analysis Model for the hot topic detection of online stock forums (온라인 주식 포럼의 핫토픽 탐지를 위한 감성분석 모형의 개발)

  • Hong, Taeho;Lee, Taewon;Li, Jingjing
    • Journal of Intelligence and Information Systems / v.22 no.1 / pp.187-204 / 2016
  • Document classification based on emotional polarity has become an important emerging task owing to the great explosion of data on the Web. In the big data age, there are too many information sources to refer to when making decisions. For example, when considering travel to a city, a person may search reviews from a search engine such as Google or from social networking services (SNSs) such as blogs, Twitter, and Facebook; the emotional polarity of positive and negative reviews helps the user decide whether or not to make the trip. Sentiment analysis of customer reviews has thus become an important research topic as data mining technology is widely adopted for text mining of the Web. Sentiment analysis classifies documents through machine learning techniques such as decision trees, neural networks, and support vector machines (SVMs), and is used to determine the attitude, position, and sensibility of the people who write articles on the various topics published on the Web. Regardless of their polarity, emotional reviews are very helpful material for analyzing customers' opinions, and sentiment analysis helps with understanding what customers really want, instantly, through automated text mining techniques that extract subjective information from text on the Web. In this study, we developed a model that selects hot topics from user posts on China's online stock forum by using the k-means algorithm and the self-organizing map (SOM). In addition, we developed a detection model that predicts hot topics by using machine learning techniques such as the logit model, the decision tree, and the SVM. We employed sentiment analysis, which calculates a sentiment value for a document by comparison and classification against a polarity sentiment dictionary (positive or negative). The online stock forum is an attractive site because of its information about stock investment: users post numerous texts about stock movements, analyzing the market according to government policy announcements, market reports, reports from economic research institutes, and even rumors. We divided the forum's topics into 21 categories for the sentiment analysis, and 144 topics were initially selected among these categories. The posts were crawled to build positive and negative text databases, and after preprocessing the text from March 2013 to February 2015 we ultimately obtained 21,141 posts on 88 topics. An interest index was defined to select the hot topics, and the k-means algorithm and the SOM produced equivalent results on these data. We developed decision tree models to detect hot topics with three algorithms, CHAID, CART, and C4.5; the results of CHAID were subpar compared to the others. We also employed SVMs to detect hot topics from the negative data, training the models with the radial basis function (RBF) kernel via a grid search. The detection of hot topics using sentiment analysis provides investors with the latest trends and hot topics in the stock forum so that they no longer need to search the vast amounts of information on the Web. Our proposed model is also helpful for rapidly determining customers' signals or attitudes toward government policy and firms' products and services.
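
A minimal sketch, on synthetic data, of the final detection step described above: an RBF-kernel SVM tuned by grid search. The features and grid values are illustrative assumptions, not the paper's actual feature set or parameter ranges.

```python
# Hypothetical hot-topic detection with an RBF-kernel SVM and grid search.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for per-topic features (e.g., interest index,
# post volume, sentiment value); labels mark hot vs. non-hot topics.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

grid = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1]},
    cv=5,
)
grid.fit(X_tr, y_tr)
print("best params:", grid.best_params_)
print("test accuracy:", grid.score(X_te, y_te))
```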

A Proposal of a Keyword Extraction System for Detecting Social Issues (사회문제 해결형 기술수요 발굴을 위한 키워드 추출 시스템 제안)

  • Jeong, Dami;Kim, Jaeseok;Kim, Gi-Nam;Heo, Jong-Uk;On, Byung-Won;Kang, Mijung
    • Journal of Intelligence and Information Systems / v.19 no.3 / pp.1-23 / 2013
  • To discover significant social issues, such as unemployment, economic crisis, and social welfare, that urgently need to be solved in modern society, the existing approach has researchers collect opinions from professional experts and scholars through online or offline surveys. However, this method is not always effective: owing to cost, a large number of survey replies are seldom gathered, and in some cases it is hard to find professionals dealing with specific social issues, so the sample set is often small and may be biased. Furthermore, regarding a given social issue, several experts may reach totally different conclusions, because each expert has a subjective point of view and a different background. In such cases it is considerably hard to figure out what the current social issues are and which are really important. To surmount these shortcomings, in this paper we develop a prototype system that semi-automatically detects social issue keywords representing social issues and problems from about 1.3 million news articles issued by about ten major domestic presses in Korea from June 2009 to July 2012. Our proposed system consists of (1) collecting and extracting texts from the collected news articles, (2) identifying only those news articles related to social issues, (3) analyzing the lexical items of Korean sentences, (4) finding a set of topics regarding social keywords over time based on probabilistic topic modeling, (5) matching relevant paragraphs to a given topic, and (6) visualizing social keywords for easy understanding. In particular, we propose a novel matching algorithm relying on generative models, whose goal is to best match paragraphs to each topic. Technically, using a topic model such as Latent Dirichlet Allocation (LDA), we can obtain a set of topics, each of which has relevant terms and their probability values. In our problem, given a set of text documents (e.g., news articles), LDA yields a set of topic clusters, and each topic cluster is labeled by human annotators, where each topic label stands for a social keyword. For example, suppose there is a topic (e.g., Topic1 = {(unemployment, 0.4), (layoff, 0.3), (business, 0.3)}) and a human annotator labels Topic1 "Unemployment Problem". Looking only at such social keywords, we have no idea of the detailed events occurring in our society. To tackle this, we develop a matching algorithm that computes the probability value of a paragraph given a topic, relying on (i) the topic terms and (ii) their probability values. Given a set of text documents, we segment each document into paragraphs; meanwhile, using LDA, we extract a set of topics from the documents. Based on our matching process, each paragraph is assigned to the topic it best matches, so that each topic ends up with several best-matched paragraphs. For instance, given a topic (e.g., Unemployment Problem) and its best-matched paragraph (e.g., "Up to 300 workers lost their jobs in XXX company at Seoul"), we can grasp detailed information associated with the social keyword, such as "300 workers", "unemployment", "XXX company", and "Seoul". In addition, our system visualizes social keywords over time. Therefore, through our matching process and keyword visualization, researchers can detect social issues easily and quickly. With this prototype system, we have detected various social issues appearing in our society and demonstrated the effectiveness of our proposed methods experimentally. Our proof-of-concept system is also available at http://dslab.snu.ac.kr/demo.html.
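
A minimal sketch, not the authors' implementation, of steps (4) and (5): LDA extracts topics, and each paragraph is assigned to the topic under which its terms have the highest total log-probability. The toy corpus is a hypothetical stand-in for the news articles.

```python
# Hypothetical LDA topic extraction and paragraph-to-topic matching.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

documents = [  # toy corpus standing in for news articles
    "workers lost jobs unemployment rises in seoul",
    "company layoff hits business amid unemployment",
    "welfare budget expands to support social welfare programs",
]
paragraphs = ["300 workers lost their jobs", "welfare programs expand"]

vec = CountVectorizer()
X = vec.fit_transform(documents)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Row-normalize component weights into p(term | topic).
topic_term = lda.components_ / lda.components_.sum(axis=1, keepdims=True)

def match_score(paragraph: str, k: int) -> float:
    """Sum of count * log p(term | topic k) over the paragraph's terms."""
    counts = vec.transform([paragraph]).toarray()[0]
    mask = counts > 0
    return float(np.sum(counts[mask] * np.log(topic_term[k][mask] + 1e-12)))

# Assign each paragraph to its best-matching topic.
for p in paragraphs:
    best = max(range(2), key=lambda k: match_score(p, k))
    print(f"{p!r} -> topic {best}")
```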

Ontology-based User Customized Search Service Considering User Intention (온톨로지 기반의 사용자 의도를 고려한 맞춤형 검색 서비스)

  • Kim, Sukyoung;Kim, Gunwoo
    • Journal of Intelligence and Information Systems / v.18 no.4 / pp.129-143 / 2012
  • Recently, the rapid progress of standardized web technologies and the proliferation of web users have brought an explosive increase in the production and consumption of information documents on the web. Most companies produce, share, and manage a huge number of information documents needed to perform their businesses, and they also collect, store, and manage many web documents published on the web. Along with this increase in the documents that must be managed, the need for a solution that locates information documents accurately among huge information sources has grown, and the search engine solution market has expanded accordingly. The most important functionality a search engine provides is locating accurate information documents within huge information sources. The major metric for evaluating the accuracy of a search engine is relevance, which consists of two measures: precision and recall. Precision is a measure of exactness, that is, what percentage of the information returned as answers is actually relevant, whereas recall is a measure of completeness, that is, what percentage of the relevant answers is retrieved. These two measures are weighted differently according to the applied domain: when information such as patent documents and research papers must be searched exhaustively, it is better to increase recall, whereas when the amount of information is small, it is better to increase precision. Most existing web search engines use a keyword search method that returns web documents containing the keywords entered by a user. This method quickly locates all matching web documents, even when many search words are inputted, but it has a fundamental limitation: it does not consider the search intention of the user and therefore retrieves irrelevant results along with relevant ones, so additional time and effort are needed to sift the relevant results from everything the engine returns. That is, keyword search can increase recall, but it makes it difficult to locate the web documents a user actually wants, because it provides no means of understanding the user's intention and reflecting it in the search process. Thus, this research suggests a new method that combines an ontology-based search solution with the core search functionality of existing search engine solutions, enabling a search engine to provide optimal results by inferring the search intention of the user. To that end, we build an ontology containing the concepts of a specific domain and the relationships among them. The ontology is used to infer synonyms of the search keywords inputted by a user, so that the user's search intention is reflected in the search process more actively than in existing search engines. Based on the proposed method, we implement a prototype search system and test it in the patent domain, experimenting on searching for documents relevant to a patent. The experiment shows that our system increases both recall and precision and improves search productivity through an improved user interface that lets a user interact with the search system effectively. In future research, we will validate the performance of our prototype system by comparing it with other search engine solutions, and we will extend the applied domain to other areas of information search, such as portals.
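
A minimal sketch of the ontology-based query expansion idea described above, under assumed data: the user's keywords are expanded with synonyms inferred from a small domain ontology before keyword matching, so documents phrased differently from the query can still be retrieved. The ontology fragment, documents, and matching rule are hypothetical.

```python
# Hypothetical ontology-based query expansion before keyword search.
from typing import Dict, List, Set

# Toy domain ontology fragment: concept -> synonymous terms.
ONTOLOGY_SYNONYMS: Dict[str, Set[str]] = {
    "automobile": {"car", "vehicle", "motorcar"},
    "battery": {"accumulator", "cell"},
}

def expand_query(keywords: List[str]) -> Set[str]:
    """Expand user keywords with ontology-inferred synonyms."""
    expanded = set(keywords)
    for kw in keywords:
        expanded |= ONTOLOGY_SYNONYMS.get(kw, set())
    return expanded

def keyword_search(query_terms: Set[str], documents: List[str]) -> List[str]:
    """Return documents containing any of the (expanded) query terms."""
    return [d for d in documents if query_terms & set(d.lower().split())]

docs = [
    "electric vehicle battery pack design",  # matched via the synonym "vehicle"
    "automobile chassis patent claim",
]
print(keyword_search(expand_query(["automobile"]), docs))
```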

Issue tracking and voting rate prediction for 19th Korean presidential election candidates (댓글 분석을 통한 19대 한국 대선 후보 이슈 파악 및 득표율 예측)

  • Seo, Dae-Ho;Kim, Ji-Ho;Kim, Chang-Ki
    • Journal of Intelligence and Information Systems / v.24 no.3 / pp.199-219 / 2018
  • With the everyday use of the Internet and the spread of various smart devices, users can communicate in real time, and the existing communication style has changed. This shift in who produces information has made data massive, giving rise to the very large volumes of information called big data, which are seen as a new opportunity to understand social issues. In particular, text mining explores patterns in unstructured text data to find meaningful information. Since text data exists in various places such as newspapers, books, and the web, it is diverse and abundant, and thus suitable for understanding social reality. In recent years, there have been increasing attempts to analyze texts from the web, such as SNS posts and blogs, where the public communicates freely; this is recognized as a useful way to grasp public opinion immediately and can be used for political, social, and cultural research. Text mining has received much attention as a means of investigating the public's view of candidates and of predicting voting rates in place of polling, because many people question the credibility of surveys, and people tend to refuse to respond, or not to reveal their real intentions, when polled. This study collected comments from the largest Internet portal site in Korea and investigated the 19th Korean presidential election of 2017. We collected 226,447 comments from April 29, 2017 to May 7, 2017, a span that includes the period just before election day during which the publication of opinion polls is prohibited. We analyzed word frequencies, associated emotional words, topic emotions, and candidate voting rates. Frequency analysis identified the words that were the most important issues each day; in particular, following each presidential debate, the candidate who became an issue rose to the top of the frequency analysis. The analysis of associated emotional words identified the issues most relevant to each candidate, and topic emotion analysis identified each candidate's topics and the public's emotions about them. Finally, we estimated the voting rate by combining comment volume with sentiment scores. The analysis showed that news comments are an effective tool for tracking the issues around presidential candidates and for predicting voting rates: this study produced daily issues and a quantitative sentiment index, and its predicted voting rates precisely matched the ranking of the top five candidates. Each candidate can thus objectively grasp public opinion and reflect it in election strategy, using positive issues more actively and trying to correct negative ones; in particular, candidates should be aware that a moral problem can severely damage their reputation. Voters can objectively examine the issues and public opinion about each candidate and make more informed decisions when voting; referring to results like these before voting lets them see the public's opinions in the big data and vote from a more objective perspective. If candidates campaign with reference to big data analysis, the public will engage more actively on the web, recognizing that their wants are being reflected, and will be able to express their political views in various web venues. This can contribute to political participation by the people.
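
A hypothetical sketch of the final estimation step described above: combining each candidate's comment volume with an average sentiment score into a predicted vote share. The blending weight and all figures are illustrative assumptions, not the authors' actual formula or data.

```python
# Hypothetical vote-share estimation from comment volume and sentiment.
comments = {  # candidate -> (comment count, mean sentiment in [-1, 1]); toy data
    "A": (90_000, 0.35),
    "B": (60_000, 0.10),
    "C": (40_000, -0.05),
}
total_volume = sum(v for v, _ in comments.values())

def support_score(volume: int, sentiment: float, alpha: float = 0.5) -> float:
    """Blend attention share (volume) with attitude (sentiment rescaled to [0, 1])."""
    return alpha * (volume / total_volume) + (1 - alpha) * ((sentiment + 1) / 2)

scores = {c: support_score(v, s) for c, (v, s) in comments.items()}
total_score = sum(scores.values())
predicted = {c: round(100 * s / total_score, 1) for c, s in scores.items()}
print(predicted)  # predicted vote shares (%); here the ranking is A > B > C
```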