• Title/Summary/Keyword: social platform

Search Results: 810

The Diagnosis of Work Connectivity between Local Government Departments -Focused on Busan Metropolitan City IT Project - (지자체 부서 간 업무연계성 진단 -부산광역시 정보화사업을 중심으로 -)

  • JI, Sang-Tae; NAM, Kwang-Woo
    • Journal of the Korean Association of Geographic Information Studies / v.21 no.3 / pp.176-188 / 2018
  • Modern urban problems increasingly form a complex mix of issues that cannot be solved by a single department acting alone, and the need to establish a cooperation system based on data exchange between departments is growing. This study therefore analyzed Busan Metropolitan City's IT projects from 2014 to 2018 to understand how departmental data are used and shared, on the premise that cooperation between departments can start from sharing data with high common utility. In addition, based on the results of an FGI (Focus Group Interview) conducted with officials of the departments responsible for the informatization projects, we verified the results of the data-status analysis. At the same time, we identified the need for data linkage between departments through SNA (Social Network Analysis) and presented the data that should be shared first in the future. The results show that most information systems currently use their data only within the department that produced them, and most linked data are concentrated in the information department. This study therefore suggests the following solutions. First, to prevent overlapping investments caused by departments operating in isolation and to promote information sharing, small platforms should be built that tie departments with high mutual connectivity into small blocks. Second, a local-level process is needed to develop data standards as an extension of national standards so that the information can be used in more fields. Third, we propose a system that can integrate various types of information based on address and location information by applying a cloud-based GIS platform. The results of this study are expected to contribute to building a cooperation system between departments by expanding information sharing while reducing costs.
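
A rough sketch of the kind of SNA step described above: it ranks data sets by degree centrality in a department-to-data linkage network. The department names, data sets, and the use of networkx are illustrative assumptions, not details taken from the study.

```python
# Rank data sets for priority sharing by their degree centrality in a
# department-to-data linkage network (hypothetical links, for illustration).
import networkx as nx

links = [
    ("Transport Dept", "road_network"),
    ("Urban Planning Dept", "road_network"),
    ("Urban Planning Dept", "land_use"),
    ("Environment Dept", "land_use"),
    ("Information Dept", "road_network"),
    ("Information Dept", "population_register"),
    ("Welfare Dept", "population_register"),
]

G = nx.Graph()
G.add_edges_from(links)

# Data sets linked to many departments are candidates for priority sharing
data_nodes = {data for _, data in links}
centrality = nx.degree_centrality(G)
for name in sorted(data_nodes, key=lambda n: centrality[n], reverse=True):
    print(f"{name}: degree centrality = {centrality[name]:.2f}")
```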

A study of Artificial Intelligence (AI) Speaker's Development Process in Terms of Social Constructivism: Focused on the Products and Periodic Co-revolution Process (인공지능(AI) 스피커에 대한 사회구성 차원의 발달과정 연구: 제품과 시기별 공진화 과정을 중심으로)

  • Cha, Hyeon-ju; Kweon, Sang-hee
    • Journal of Internet Computing and Services / v.22 no.1 / pp.109-135 / 2021
  • This study classified the development process of artificial intelligence (AI) speakers through an analysis of AI speaker coverage in traditional news reports and identified the characteristics of each product by period. The theoretical background of the analysis consists of news frames and topic frames. Topic modeling based on the LDA method and semantic network analysis were used as analysis techniques within an overall content analysis approach. First, 2,710 news articles related to AI speakers published from 2014 to 2019 were collected; second, topic frames were analyzed using NodeXL. The first result is that the trend of topic frames differed by AI speaker provider type according to the characteristics of the four operator groups (communication service providers, online platforms, OS providers, and IT device manufacturers). Specifically, online platform operators (Google, Naver, Amazon, Kakao) framed AI speakers as 'search or input devices'. Telecommunications operators (SKT, KT), on the other hand, showed prominent frames around IPTV, the parent companies' flagship business, and the AI speaker as an 'auxiliary device' of the telecommunications business. Furthermore, the frame of 'personalization of products and voice service' was prominent for OS providers (MS, Apple), while the frame for the IT device manufacturer (Samsung) was 'Internet of Things (IoT) integrated intelligence system'. The second result is that the topic frames by development period (by year) centered on AI technology in the first phase (2014-2016), on the social interaction between AI technology and users in the second phase (2017-2018), and shifted from technology-centered to user-centered in the third phase (2019). The QAP analysis found that the news frames by operator and development period in AI speaker development are socially constructed by determinants of media discourse. The implication of this study is that the evolution of AI speakers reflects the characteristics of the parent companies and a process of co-evolution driven by interactions with users by operator and period; these results are important indicators for predicting the future prospects of AI speakers and setting directions accordingly.
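
A minimal sketch of LDA topic modeling of the kind used in the study, applied to a toy English corpus with scikit-learn; the documents, parameters, and library choice are illustrative assumptions (the study analysed Korean news text and used NodeXL for the network analysis).

```python
# LDA topic modeling on a toy corpus (illustrative only; the study analysed
# Korean news articles about AI speakers).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "ai speaker voice search query assistant",
    "iptv set top box telecom subscription bundle",
    "smart home iot appliance integration sensor",
    "voice personalization os update assistant",
]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

# Print the top terms of each inferred topic
terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top_terms = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"Topic {k}: {', '.join(top_terms)}")
```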

Perception and Appraisal of Urban Park Users Using Text Mining of Google Maps Review - Cases of Seoul Forest, Boramae Park, Olympic Park - (구글맵리뷰 텍스트마이닝을 활용한 공원 이용자의 인식 및 평가 - 서울숲, 보라매공원, 올림픽공원을 대상으로 -)

  • Lee, Ju-Kyung; Son, Yong-Hoon
    • Journal of the Korean Institute of Landscape Architecture / v.49 no.4 / pp.15-29 / 2021
  • This study aims to grasp the perception and appraisal of urban park users through text analysis, using review data provided by Google Maps. Google Maps Review is an online review platform that provides evaluative information about locations through social media and offers an understanding of locations from the perspective of general reviewers and Regional Guides registered as members of Google Maps. The study examined whether Google Maps Reviews are useful for extracting meaningful information about user perceptions and appraisals for park management plans. Three urban parks in Seoul, South Korea, were chosen: Seoul Forest, Boramae Park, and Olympic Park. Review data for each of the three parks were collected via web crawling using Python. Through text analysis, the keywords and network structure characteristics of each park were analyzed, park ratings were examined, and the reviews of residents and foreign tourists were compared. The keywords common to the review comments for the three parks were "walking", "bicycle", "rest", and "picnic" for activities; "family", "child", and "dogs" for accompanying types; and "playground" and "walking trail" for park facilities. Looking at the characteristics of each park, Seoul Forest shows many nature-based outdoor activities, while the lack of parking spaces and weekend congestion affected users negatively. Boramae Park has the character of a city park with various facilities supporting numerous activities, but reviewers often cited the park's complexity and negative aspects related to dog-walking groups. At Olympic Park, large-scale complex facilities and cultural events were frequently mentioned, emphasizing its entertainment functions. Google Maps Review can serve as useful data for identifying parks' overall user experience and general sentiment. Compared with data from other social media sites, Google Maps Review data provides ratings and helps identify factors behind user satisfaction and dissatisfaction.
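
A minimal sketch of the keyword-frequency and co-occurrence step described above, on a few invented English review snippets; the tokens, stop-word list, and counting scheme are illustrative assumptions, not the study's actual pipeline for Korean and English Google Maps reviews.

```python
# Keyword frequency and pairwise co-occurrence from a few toy park reviews
# (the study crawled Korean and English Google Maps reviews with Python).
from collections import Counter
from itertools import combinations

reviews = [
    "great walking trail and picnic spots for family",
    "crowded on weekends and parking is hard",
    "nice playground for child and dogs allowed",
    "good for bicycle rides and a quiet rest",
]

stopwords = {"and", "for", "a", "is", "on", "the"}
tokenized = [[w for w in r.split() if w not in stopwords] for r in reviews]

# Keyword frequencies across all reviews
freq = Counter(w for tokens in tokenized for w in tokens)
print(freq.most_common(5))

# Co-occurrence counts within each review: edges of a simple keyword network
cooccurrence = Counter()
for tokens in tokenized:
    for a, b in combinations(sorted(set(tokens)), 2):
        cooccurrence[(a, b)] += 1
print(cooccurrence.most_common(5))
```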

Animal Infectious Diseases Prevention through Big Data and Deep Learning (빅데이터와 딥러닝을 활용한 동물 감염병 확산 차단)

  • Kim, Sung Hyun; Choi, Joon Ki; Kim, Jae Seok; Jang, Ah Reum; Lee, Jae Ho; Cha, Kyung Jin; Lee, Sang Won
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.137-154 / 2018
  • Animal infectious diseases, such as avian influenza and foot-and-mouth disease, occur almost every year and cause huge economic and social damage to the country. To prevent this, the quarantine authorities have made various human and material efforts, but the diseases have continued to occur. Avian influenza was first reported in 1878 and has become a national issue due to its high lethality. Foot-and-mouth disease is considered the most critical animal infectious disease internationally. In countries free of the disease, foot-and-mouth disease is regarded as an economic or political disease, because it restricts international trade by complicating the import of processed and unprocessed livestock and because quarantine is costly. In a society where the whole nation is connected as a single living zone, the spread of infectious disease cannot be prevented completely. Hence, there is a need to detect an outbreak and take action before the disease spreads. As soon as a human or animal infectious disease is confirmed, an epidemiological investigation of the confirmed cases is carried out, and measures are taken to prevent the spread of the disease according to the results. The foundation of an epidemiological investigation is figuring out where a subject has been and whom the subject has met. From a data perspective, this can be defined as predicting the cause of a disease outbreak, the outbreak location, and future infections by collecting and analyzing geographic data and relational data. Recently, attempts have been made to develop infectious disease prediction models using big data and deep learning technology, but model-building studies and case reports are still scarce. KT and the Ministry of Science and ICT have been carrying out big data projects since 2014, as part of national R&D projects, to analyze and predict the routes of livestock-related vehicles. To prevent animal infectious diseases, the researchers first developed a prediction model based on regression analysis using vehicle movement data. After that, more accurate prediction models were constructed using machine learning algorithms such as logistic regression, Lasso, support vector machine, and random forest. In particular, the prediction model for 2017 added the risk of diffusion to facilities, and its performance was improved by tuning the modeling hyper-parameters in various ways. The confusion matrix and ROC curve show that the model constructed in 2017 is superior to the earlier machine learning models. The difference between the 2016 and 2017 models is that the later model additionally uses visiting information on facilities such as feed factories and slaughterhouses, and poultry information that was expanded from chicken and duck to goose and quail. In addition, in 2017 an explanation of the results was added to help the authorities make decisions and to provide a basis for persuading stakeholders. This study reports an animal infectious disease prevention system constructed on the basis of big data on hazardous vehicle movements, farms, and the environment. The significance of this study is that it describes the evolution of a big-data-based prediction model that is used in the field; the model is expected to become more complete if virus types are taken into consideration. This will contribute to data utilization and analysis-model development in related fields. We also expect the system constructed in this study to enable more preventive and effective disease control.
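
A minimal sketch of the kind of model comparison described above (logistic regression, an L1-penalised "Lasso-like" classifier, SVM, and random forest, evaluated with a confusion matrix and ROC AUC), run on synthetic data; the features and parameters are illustrative assumptions, not the project's vehicle-movement data.

```python
# Compare classifiers with a confusion matrix and ROC AUC, as in the model
# evolution described above; synthetic features stand in for the project's
# vehicle-movement and facility-visit data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "lasso-like (L1 logistic)": LogisticRegression(penalty="l1", solver="liblinear"),
    "support vector machine": SVC(probability=True),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    cm = confusion_matrix(y_te, model.predict(X_te))
    print(f"{name}: ROC AUC = {auc:.3f}")
    print(cm)
```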

Operation Measures of Sea Fog Observation Network for Inshore Route Marine Traffic Safety (연안항로 해상교통안전을 위한 해무관측망 운영방안에 관한 연구)

  • Joo-Young Lee; Kuk-Jin Kim; Yeong-Tae Son
    • Journal of the Korean Society of Marine Environment & Safety / v.29 no.2 / pp.188-196 / 2023
  • Among marine accidents caused by bad weather, restricted visibility due to sea fog leads to accidents such as grounding and hull damage, along with the casualties they entail, and such accidents continue to occur every year. In addition, low visibility at sea is emerging as a social problem: passenger ships are delayed and controlled en masse even when conditions differ locally, causing considerable inconvenience to island residents who depend on them for transportation. Moreover, such control measures are becoming more problematic because sea fog cannot be quantified objectively, owing to regional deviations and observation criteria that differ from person to person. Currently, the VTS of each port controls ship operations when the visibility distance falls below 1 km, but objective data collection is limited to the extent that sea fog visibility assessment depends on a visibility meter or visual observation. The government is building marine weather signal signs and sea fog observation networks for sea fog detection and prediction as part of addressing these obstacles to marine traffic safety, but the system for observing locally occurring sea fog remains highly insufficient in practice. Accordingly, this paper examines domestic and foreign policy trends for solving the social problems caused by low visibility at sea and, by investigating and analyzing those problems, provides basic data on the need for government support to ensure maritime traffic safety under sea fog. It also aims to establish a more stable maritime traffic operation system by blocking, in advance, the marine safety risks that sea fog may ultimately cause.

Study on Basic Elements for Smart Content through the Market Status-quo (스마트콘텐츠 현황분석을 통한 기본요소 추출)

  • Kim, Gyoung Sun; Park, Joo Young; Kim, Yi Yeon
    • Korea Science and Art Forum / v.21 / pp.31-43 / 2015
  • Information and Communications Technology (ICT) is one of the technologies that represent the core value of the creative economy. It has served as a vehicle connecting existing industry and corporate infrastructure, developing existing products and services, and creating new ones. In addition to ICT, new technologies and devices, including big data, mobile gadgets, and wearable products, are attracting great attention and raising expectations of new market pioneering. Further, the Internet of Things (IoT) is helping solidify ICT-based social development by connecting human to human, human to things, and things to things. This means that manufacturing-based hardware development needs to proceed simultaneously with software development through convergence. The essential element of the convergence between hardware and software is the OS, for which the world's leading companies such as Google and Apple have launched intense development efforts, recognizing the importance of software. Against this backdrop, the status quo of the software market was examined for the present report (Korea Evaluation Institute of Industrial Technology: Professional Design Technology Development Project). As a result, Google's Android and Apple's iOS software platforms dominate the global market, and latecomers are trying to enter through various pathways, releasing web-based and similar OSs to offer the market a new paradigm. The present study aims to find ways to utilize smart content, with which anyone can become a developer on top of an OS, in response to such social change; to newly define smart content so that it can be universally utilized; and to analyze the market in order to cope with rapid change. The study method, scope, and details are as follows: literature investigation; analysis of the app market according to a smart classification system; trend analysis of the current content market; and identification of five common trends through comparison of the universal definition of smart content, the status quo of applications in the app market, and the state of the content market. In conclusion, the smart content market consists of independent segments but is expected to develop into a single organic body as those segments become connected to one another. Therefore, future classification systems and development priorities should view the area from multiple perspectives, including a social point of view encompassing existing technology, culture, business, and consumers.

Twitter Issue Tracking System by Topic Modeling Techniques (토픽 모델링을 이용한 트위터 이슈 트래킹 시스템)

  • Bae, Jung-Hwan; Han, Nam-Gi; Song, Min
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.109-122 / 2014
  • People nowadays create a tremendous amount of data on Social Network Services (SNS). In particular, the incorporation of SNS into mobile devices has resulted in massive amounts of data generation, thereby greatly influencing society. This is an unmatched phenomenon in history, and we now live in the age of big data. SNS data satisfies the defining conditions of big data: the amount of data (volume), data input and output speed (velocity), and the variety of data types (variety). Because such data covers the whole of society, issue trends discovered in SNS big data can be used as an important new source for creating value. In this study, a Twitter Issue Tracking System (TITS) is designed and implemented to meet the need for analyzing SNS big data. TITS extracts issues from Twitter texts and visualizes them on the web. The proposed system provides the following four functions: (1) provide the topic keyword set corresponding to the daily ranking; (2) visualize the daily time-series graph of a topic over a month; (3) show the importance of a topic through a treemap based on a score system and frequency; (4) visualize the daily time-series graph of keywords retrieved by keyword search. The present study analyzes the big data generated by SNS in real time. SNS big data analysis requires various natural language processing techniques, including stop-word removal and noun extraction, to process various unrefined forms of unstructured data. In addition, such analysis requires the latest big data technologies, such as the Hadoop distributed system or NoSQL databases (an alternative to relational databases), to rapidly process large amounts of real-time data. We built TITS on Hadoop to optimize big data processing, because Hadoop is designed to scale from single-node computing to thousands of machines. Furthermore, we use MongoDB, a NoSQL database: an open-source, document-oriented database that provides high performance, high availability, and automatic scaling. Unlike existing relational databases, MongoDB has no schemas or tables, and its most important goals are data accessibility and data-processing performance. In the age of big data, visualization is attractive to the big data community because it helps analysts examine data easily and clearly. Therefore, TITS uses the d3.js library as a visualization tool. This library is designed for creating Data Driven Documents that bind the document object model (DOM) to arbitrary data; the interaction with data is easy and useful for managing real-time data streams with smooth animation. In addition, TITS uses Bootstrap, a set of pre-configured plug-in style sheets and JavaScript libraries, to build the web system. The TITS graphical user interface (GUI) is designed using these libraries and can detect issues on Twitter in an easy and intuitive manner. The proposed work demonstrates the superiority of our issue detection techniques by matching detected issues with corresponding online news articles. The contributions of the present study are threefold. First, we suggest an alternative approach to real-time big data analysis, which has become an extremely important issue. Second, we apply a topic modeling technique used in various research areas, including Library and Information Science (LIS); based on this, we confirm the utility of storytelling and time-series analysis. Third, we develop a web-based system and make it available for the real-time discovery of topics. The present study conducted experiments with nearly 150 million tweets collected in Korea during March 2013.
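
A minimal sketch of the data behind function (4) above: building a daily time series of keyword counts from tweets, the kind of rows TITS would store in MongoDB and bind to a d3.js chart. The tweets and keyword are invented for illustration, not drawn from the 2013 corpus.

```python
# Build a daily time series of keyword mentions from tweets; rows in this
# shape could be stored in MongoDB and bound to a d3.js chart (toy data).
from collections import defaultdict
from datetime import date

tweets = [
    (date(2013, 3, 1), "election debate trending tonight"),
    (date(2013, 3, 1), "big election rally downtown"),
    (date(2013, 3, 2), "election results and weather updates"),
    (date(2013, 3, 2), "weather is awful today"),
]

keyword = "election"
daily_counts = defaultdict(int)
for day, text in tweets:
    if keyword in text.lower().split():
        daily_counts[day] += 1

for day in sorted(daily_counts):
    print(day.isoformat(), daily_counts[day])
```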

A Mutual P3P Methodology for Privacy Preserving Context-Aware Systems Development (프라이버시 보호 상황인식 시스템 개발을 위한 쌍방향 P3P 방법론)

  • Kwon, Oh-Byung
    • Asia Pacific Journal of Information Systems / v.18 no.1 / pp.145-162 / 2008
  • One of the major concerns in the e-society is the privacy issue. In particular, in developing robust ubiquitous smart spaces and corresponding services, user profiles and preferences are collected by service providers. The privacy issue is even more critical in context-aware services simply because most context data are themselves private information: the user's current location, current schedule, nearby friends, and even health data. To realize the potential of ubiquitous smart spaces, the systems embedded in the space should incorporate personal privacy preferences. When users invoke a set of services, they are asked to allow the service providers or the smart space to make use of personal information that raises privacy concerns. For this reason, users reluctantly provide the personal information or even refuse the service. On the other side, the service provider needs personal information that is as rich as possible, while requesting as little as possible, to discern loyal and trustworthy customers from those who are not. It would be desirable to enlarge the set of personal information the user allows in response to the service provider's request, while minimizing both the provider's requests for information the user will not submit and the user's submission of information that has no value to the provider. In particular, if any personal information the service provider requires is not allowed, the service will not be provided to the user. P3P (Platform for Privacy Preferences) has been regarded as one of the promising alternatives for preserving personal information in the course of electronic transactions. However, P3P mainly focuses on preserving the buyer's personal information, whereas the service provider's business data should from time to time be protected from unintended usage by buyers. Moreover, even though a user's privacy preferences may depend on the user's context, legacy P3P does not handle contextual changes of privacy preferences. Hence, the purpose of this paper is to propose a mutual P3P-based negotiation mechanism. First, the service provider's privacy concerns are considered as well as the user's; the user's privacy policy regarding the service provider's information should also be communicated to the provider before the service begins. Second, the privacy policy is designed contextually according to the user's current context, because a nomadic user's privacy concern structure may change with context. Hence, the methodology includes mutual privacy policy and personalization. The overall framework of the mechanism and a new code of ethics are described in Section 2. The pervasive platform for mutual P3P considers the user type and context fields, which cover current activity, location, social context, nearby objects, and the physical environment. Our mutual P3P includes privacy preferences not only for buyers but also for sellers, that is, service providers. The negotiation methodology for mutual P3P, proposed in Section 3, is based on the fact that privacy concerns arise when needs for information access and needs for information hiding exist at the same time. Our mechanism was implemented on the basis of an actual shopping mall to increase the feasibility of the idea proposed in this paper. A shopping service is assumed as a context-aware service, data groups for the service are enumerated, and the privacy policy for each data group is represented in APPEL format. To examine the performance of the example service, a simulation approach is adopted in Section 4. The simulation considers the following data elements: UserID, user preference, phone number, home address, product information, and service profile. For the negotiation, reputation is selected as the strategic value, and the following cases are compared: legacy P3P; mutual P3P without the strategic value; and mutual P3P with the strategic value. The simulation results show that mutual P3P outperforms legacy P3P. Moreover, in terms of service safety, mutual P3P with the strategic value performs better than mutual P3P without it.
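
A minimal sketch of the core check behind the mutual negotiation described above: matching a provider's requested data groups against the set the user allows. The data groups, policy structure, and outcome rule are hypothetical illustrations, not the paper's APPEL encoding or negotiation protocol.

```python
# Match a provider's requested data groups against the user's allowed set,
# the core check behind a mutual privacy-policy negotiation (hypothetical
# data groups and policy structure, not the paper's APPEL representation).
from dataclasses import dataclass, field

@dataclass
class ProviderPolicy:
    required: set = field(default_factory=set)  # service refused without these
    optional: set = field(default_factory=set)  # used only if the user allows

def negotiate(user_allowed: set, policy: ProviderPolicy) -> dict:
    missing = policy.required - user_allowed
    if missing:
        return {"service": False, "missing": missing}
    granted = policy.required | (policy.optional & user_allowed)
    return {"service": True, "granted": granted}

policy = ProviderPolicy(
    required={"UserID", "product_information"},
    optional={"phone_number", "home_address", "user_preference"},
)
user_allowed = {"UserID", "product_information", "user_preference"}

print(negotiate(user_allowed, policy))
# service granted with UserID, product_information, and user_preference
```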

Satellite Imagery and AI-based Disaster Monitoring and Establishing a Feasible Integrated Near Real-Time Disaster Monitoring System (위성영상-AI 기반 재난모니터링과 실현 가능한 준실시간 통합 재난모니터링 시스템)

  • KIM, Junwoo; KIM, Duk-jin
    • Journal of the Korean Association of Geographic Information Studies / v.23 no.3 / pp.236-251 / 2020
  • As remote sensing technologies evolve and more satellites are placed in orbit, the demand for using satellite data for disaster monitoring is rapidly increasing. Although natural and social disasters have been monitored using satellite data, the constraints on establishing an integrated satellite-based near real-time disaster monitoring system have not yet been identified, and thus a framework for establishing such a system remains to be presented. This research identifies those constraints by devising and testing a new conceptual framework of disaster monitoring, and then presents a feasible disaster monitoring system that relies mainly on acquirable satellite data. Implementing near real-time disaster monitoring by satellite remote sensing is constrained by technological and economic factors and, more significantly, by interactions between organisations and policy that hamper the timely acquisition of appropriate satellite data, as well as by institutional factors related to satellite data analysis. Such constraints could be eased by employing an integrated computing platform, such as Amazon Web Services (AWS), which enables satellite data to be obtained, stored, and analysed, and by developing a toolkit with which the satellite sensors required for monitoring specific types of disaster, and their orbits, can be identified. It is anticipated that the findings of this research can serve as a meaningful reference when trying to establish a satellite-based near real-time disaster monitoring system in any country.
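
A minimal sketch of the kind of lookup a sensor-selection toolkit might provide, mapping disaster types to candidate satellite sensors; the catalogue entries are illustrative assumptions, not the paper's actual toolkit.

```python
# A tiny disaster-type to sensor lookup of the kind a sensor-selection
# toolkit might provide (entries are illustrative assumptions).
SENSOR_CATALOGUE = {
    "flood": ["SAR (C-band)", "SAR (X-band)"],            # imaging through cloud
    "wildfire": ["thermal infrared", "shortwave infrared"],
    "oil spill": ["SAR (C-band)", "optical multispectral"],
    "landslide": ["optical high-resolution", "SAR interferometry"],
}

def recommend_sensors(disaster_type: str) -> list:
    """Return candidate sensor types for a given disaster type."""
    return SENSOR_CATALOGUE.get(disaster_type.lower(), [])

for disaster in ("flood", "wildfire", "typhoon"):
    print(disaster, "->", recommend_sensors(disaster) or "no entry")
```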

A Study on the Possibility of Self-Correction in the Market for Protecting Internet Privacy (인터넷 개인정보보호의 시장자체해결가능성에 대한 연구)

  • Chung, Sukkyun
    • Journal of Digital Convergence / v.10 no.9 / pp.27-37 / 2012
  • Internet privacy has become a significant issue in recent years in light of the sharp increase in internet-based social and economic activities. The technologies that collect, process, and disseminate personal information are improving significantly, and the demand for personal information is rising given its inherent value for targeted marketing and customized services. The high value placed on personal information has turned it into a commodity with economic worth that can be transacted in the marketplace. Therefore, privacy needs to be approached from an economic perspective in addition to the prevailing approaches. This article analyzes the behavior of consumers and firms in gathering personal information and shielding it from unauthorized access, using a game-theoretic framework in which players strive to do their best under the given conditions. The analysis shows that there are no market forces that compel all firms to respect consumer privacy, and that government intervention, in the form of nudging incentives for information sharing and/or strict regulation, is necessary.
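
A minimal sketch of the kind of game-theoretic check described above: a hypothetical 2x2 game between two firms choosing whether to protect consumer privacy, whose unique Nash equilibrium under these illustrative payoffs is mutual non-protection; the numbers are assumptions, not taken from the article.

```python
# A hypothetical 2x2 game between two firms choosing whether to protect
# consumer privacy; payoffs are illustrative, not from the article.
# Each entry is (payoff to firm A, payoff to firm B).
PAYOFFS = {
    ("protect", "protect"): (2, 2),
    ("protect", "exploit"): (0, 3),
    ("exploit", "protect"): (3, 0),
    ("exploit", "exploit"): (1, 1),
}
STRATEGIES = ("protect", "exploit")

def best_response(opponent_action: str, player: int) -> str:
    """Best reply for `player` (0 = firm A, 1 = firm B) to the opponent's action."""
    if player == 0:
        return max(STRATEGIES, key=lambda a: PAYOFFS[(a, opponent_action)][0])
    return max(STRATEGIES, key=lambda b: PAYOFFS[(opponent_action, b)][1])

# Mutual best responses identify the Nash equilibria
equilibria = [
    (a, b) for a in STRATEGIES for b in STRATEGIES
    if best_response(b, 0) == a and best_response(a, 1) == b
]
print(equilibria)  # -> [('exploit', 'exploit')] under these payoffs
```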