• Title/Summary/Keyword: Distributed data collection


Extraction of Soil Wetness Information and Application to Distribution-Type Rainfall-Runoff Model Utilizing Satellite Image Data and GIS (위성영상자료와 GIS를 활용한 토양함수정보 추출 및 분포형 강우-유출 모형 적용)

  • Lee, Jin-Duk;Lee, Jung-Sik;Hur, Chan-Hoe;Kim, Suk-Dong
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.19 no.3
    • /
    • pp.23-32
    • /
    • 2011
  • This research uses Vflo, a distributed model that divides a subwatershed into square grids and interprets the diverse topographic elements obtained through GIS processing. To drive the distributed model, soil wetness information was extracted from LANDSAT 7 ETM+ satellite data through the Tasseled Cap transformation and applied to each cell of the test area, unlike previous studies, which applied the basin-average soil condition uniformly regardless of spatial differences within the subwatershed. As a result, it was ascertained that representing the spatial variation of soil wetness within a subwatershed is well suited to the distributed model. In addition, we derived a relation between the soil wetness at the image acquisition time and the preceding 10-day rainfall, and improved the feasibility of the weights obtained from this relation.
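
The wetness information referred to above comes from the Tasseled Cap transformation, a fixed linear combination of the Landsat 7 ETM+ reflectance bands. The sketch below illustrates that step only; the coefficients are the commonly cited Huang et al. (2002) at-satellite reflectance values, and the band variable names are placeholders, not the authors' actual code or preprocessing chain.

```python
import numpy as np

# Tasseled Cap wetness coefficients for Landsat 7 ETM+ at-satellite reflectance
# (commonly cited Huang et al., 2002 values; the paper's exact coefficients and
# preprocessing may differ).
WETNESS_COEFFS = {
    "B1": 0.2626, "B2": 0.2141, "B3": 0.0926,
    "B4": 0.0656, "B5": -0.7629, "B7": -0.5388,
}

def tasseled_cap_wetness(bands: dict) -> np.ndarray:
    """Per-pixel wetness component from a dict of 2-D reflectance band arrays."""
    return sum(WETNESS_COEFFS[name] * bands[name] for name in WETNESS_COEFFS)

# Hypothetical usage with rasters already resampled to the Vflo grid:
# wetness = tasseled_cap_wetness({"B1": b1, "B2": b2, "B3": b3,
#                                 "B4": b4, "B5": b5, "B7": b7})
```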

Design of Distributed Hadoop Full Stack Platform for Big Data Collection and Processing (빅데이터 수집 처리를 위한 분산 하둡 풀스택 플랫폼의 설계)

  • Lee, Myeong-Ho
    • Journal of the Korea Convergence Society
    • /
    • v.12 no.7
    • /
    • pp.45-51
    • /
    • 2021
  • With the rapid shift to non-face-to-face environments and mobile-first strategies, the explosive annual growth of structured and unstructured data demands new decision making and services based on big data in all fields. However, there have been few reference cases in which the Hadoop Ecosystem is used to collect and load this rapidly growing big data onto a standard platform applicable in a practical environment and then to store the well-structured results in a relational database. Therefore, in this study, unstructured data matching search keywords was collected from social network services on a Hadoop 2.0 cluster of three virtual machine servers within the Spring Framework environment. The collected unstructured data was loaded into the Hadoop Distributed File System and HBase, and the platform was designed and implemented to standardize it with a morpheme analyzer and store the result in a relational database. In the future, research on clustering, classification, and analysis with machine learning using Hive or Mahout should be continued for deeper data analysis.
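
As a rough illustration of the collect-load-analyze-store flow described above, the following Python sketch lands raw SNS text in HDFS, runs a Korean morpheme analyzer, and writes the standardized rows to a relational table. The endpoints, table names, and library choices (hdfs, KoNLPy, SQLite) are assumptions for illustration; the paper's own stack is Java/Spring on a three-VM Hadoop 2.0 cluster with HBase.

```python
from hdfs import InsecureClient   # WebHDFS client
from konlpy.tag import Okt        # Korean morpheme analyzer
import sqlite3

# Placeholder endpoints; the paper's cluster is Hadoop 2.0 on three VMs.
hdfs_client = InsecureClient("http://namenode:50070", user="hadoop")
okt = Okt()

def ingest(post_id: str, raw_text: str) -> None:
    # 1) Land the raw, unstructured SNS text in HDFS.
    hdfs_client.write(f"/sns/raw/{post_id}.txt", data=raw_text,
                      encoding="utf-8", overwrite=True)

    # 2) Standardize it with the morpheme analyzer.
    morphemes = okt.morphs(raw_text)

    # 3) Store the structured result in a relational database (SQLite stands in
    #    here for the RDBMS used in the paper).
    with sqlite3.connect("bigdata.db") as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS posts (id TEXT, morphemes TEXT)")
        conn.execute("INSERT INTO posts VALUES (?, ?)",
                     (post_id, " ".join(morphemes)))
```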

Type of fault-based RFID Management System in the Supply Chain (공급망상에서 RFID 관리 시스템 기반 장애 유형 처리에 대한 연구)

  • Suh, Byong-Yoon;Kim, Hyong-Do;Kang, Kyung-Sik
    • Journal of the Korea Safety Management & Science
    • /
    • v.12 no.3
    • /
    • pp.169-176
    • /
    • 2010
  • Owing to recent changes in the IT environment and the advancement of RFID (Radio Frequency Identification) technology, RFID has been widely applied in the field of logistics and distribution. Through RFID, information can now be acquired in real time more accurately and promptly than with past data-collection methods. However, the logistics and distribution field is spread over a very wide area, and management systems that can centrally monitor, in real time, the RFID systems installed in logistics centers and stores distributed across many regions remain insufficient. Therefore, this study proposes a management system that, when an error occurs in an RFID system installed at a strategic site, transmits a report in real time via SMS and to the integrated monitoring system according to pre-defined error types, and allows the error to be handled from a remote location with a mobile device. The purpose of the proposed error management system is to minimize data loss in the supply chain by coping quickly with errors in the areas where RFID systems are installed.
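
A minimal sketch of the fault-type handling idea described above: an error event raised at an RFID installation is matched against pre-defined fault types and pushed both to SMS and to the integrated monitoring system. The fault codes, class names, and notification stubs are hypothetical, not the paper's implementation.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical pre-defined fault types; the paper's taxonomy may differ.
FAULT_TYPES = {
    "READER_OFFLINE":  "RFID reader is unreachable",
    "TAG_READ_FAIL":   "Tag read rate dropped below threshold",
    "MIDDLEWARE_DOWN": "Middleware service stopped",
}

@dataclass
class FaultEvent:
    site: str             # logistics center or store where the reader is installed
    code: str             # one of FAULT_TYPES
    detected_at: datetime

def send_sms(message: str) -> None:            # stand-in for an SMS gateway call
    print("SMS:", message)

def push_to_monitoring(message: str) -> None:  # stand-in for the central monitor API
    print("MONITOR:", message)

def report(event: FaultEvent) -> None:
    """Classify the fault and notify both channels in real time."""
    description = FAULT_TYPES.get(event.code, "Unknown fault type")
    message = f"[{event.site}] {event.code}: {description} at {event.detected_at:%H:%M:%S}"
    send_sms(message)
    push_to_monitoring(message)

report(FaultEvent("Busan DC", "READER_OFFLINE", datetime.now()))
```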

Adaptive Success Rate-based Sensor Relocation for IoT Applications

  • Kim, Moonseong;Lee, Woochan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.9
    • /
    • pp.3120-3137
    • /
    • 2021
  • Small IoT wireless sensing devices can be deployed from small aircraft such as drones, and mobile IoT devices can then be relocated to suit data collection using efficient relocation algorithms. However, the shape of the terrain may be unpredictable, and the mobile IoT devices suited to such terrain are hopping devices that move by jumping. So far, most hopping-sensor relocation studies have made the unrealistic assumption that every hopping device knows the overall state of the entire network and the current state of every other device. Recent work has proposed relocation algorithms based on a more realistic distributed network environment that do not require all information to be shared simultaneously. However, because the shortest-path-based algorithm exchanges communication and movement requests with terminal nodes, it is not suitable for areas where obstacles are unevenly distributed. The proposed scheme applies a simple Monte Carlo method to a relay-node selection random variable that reflects the characteristics of the obstacle distribution, choosing the best relay node in a reinforcement-learning manner rather than fixing specific relay nodes. Using the relay-node selection random variable can significantly reduce the additional messages generated when selecting the shortest path. An additional contribution of this paper is that, to our knowledge, the first distributed-environment-based relocation protocol reflecting the characteristics of real-world physical devices is proposed and implemented in the OMNeT++ simulator. We also reconstruct a three-day disaster environment and evaluate the performance of the proposed protocol in this simulated real-world setting.
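
The relay selection described above can be pictured as weighted random sampling: each candidate relay is drawn with probability proportional to a success-rate statistic that implicitly encodes the local obstacle distribution. The sketch below, with assumed field names and weights, shows that sampling step only; it is not the authors' protocol.

```python
import random

def choose_relay(candidates):
    """candidates: list of (node_id, success_rate) pairs, where success_rate is
    the observed fraction of successful hops through that neighbor (0..1).
    Samples a relay in proportion to its success rate, so directions dense with
    obstacles are tried less often without ever being excluded."""
    nodes, rates = zip(*candidates)
    if sum(rates) == 0:                      # no history yet: fall back to uniform
        return random.choice(nodes)
    return random.choices(nodes, weights=rates, k=1)[0]

# Hypothetical neighbor table of one hopping sensor node
neighbors = [("n3", 0.9), ("n7", 0.4), ("n12", 0.1)]
print(choose_relay(neighbors))
```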

Offline Shopping During the COVID-19 Pandemic: Between Need and Fear

  • USMAN, Hardius;PROJO, Nucke Widowati Kusumo;CHAIRY, Chairy
    • Fourth Industrial Review
    • /
    • v.2 no.2
    • /
    • pp.25-37
    • /
    • 2022
  • Purpose - The purposes of this research are: (1) to build and test a research model that integrates the Theory of Reasoned Action (TRA) with fear, perceived risk, and health protocols; and (2) to examine the impact of compliance with health protocols on consumer behavior when shopping offline. Research design, data, and methodology - Data were collected with a self-administered survey, and the questionnaire was distributed online. A total of 504 Indonesian respondents aged 18 years or older participated in this research. Data were analyzed using factor analysis, multiple regression, and multiple regression with interaction. Result - This study reveals several findings: (1) attitude and subjective norm have a significant effect on offline shopping behavior; (2) fear has both a direct and an indirect effect on offline shopping behavior; (3) the effect of perceived risk on the intensity of offline shopping is determined by compliance with health protocols. Conclusion - This paper discusses the direct influence of attitudes and subjective norms on behavior. It also integrates fear, perceived risk, and health protocol factors into TRA, which has rarely been done, especially in the context of the COVID-19 pandemic.
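
The third finding is a moderation effect, which is typically tested with a regression that includes an interaction term. The sketch below shows how such a model might look in statsmodels; the variable names and data file are placeholders, not the authors' actual codebook or analysis script.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder survey file and column names.
df = pd.read_csv("offline_shopping_survey.csv")

# Multiple regression with interaction: a significant coefficient on
# perceived_risk:health_protocol would indicate that protocol compliance
# moderates the effect of perceived risk on offline shopping intensity.
model = smf.ols(
    "shopping_intensity ~ attitude + subjective_norm + fear"
    " + perceived_risk * health_protocol",
    data=df,
).fit()
print(model.summary())
```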

Distributed Software Tools Enabling Efficient RFID Data Pre-Processing Using Agent Mobility (에이전트 이동성을 이용한 효율적인 전자태그 데이터 전처리 가능한 분산 소프트웨어 도구)

  • Ahn, Yong-Sun;Ahn, Jin-Ho
    • Journal of Korea Multimedia Society
    • /
    • v.12 no.4
    • /
    • pp.608-615
    • /
    • 2009
  • As RFID tag prices have been declining rapidly with the advance of RFID technology, tags are now attached to individual items rather than only to packing boxes, so that each item can be managed more precisely. However, because readers and middleware that process RFID data have limited hardware resources, mechanisms are essential for handling very large amounts of tag data quickly. In this paper, we design and implement new mobile agent-based distributed software tools to satisfy this requirement efficiently. The tools provide an environment in which the required data can be pre-processed repeatedly in transit by transferring a mobile agent, together with its specified data collection policy, to numerous mobile readers. This approach can significantly reduce the time required to process huge volumes of tag data at readers and middleware with very high recognition rates, compared with the existing approach of processing the data at fixed readers only after it has arrived at its destination.
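
A minimal sketch of the mobile-agent idea described above: the agent carries its data collection policy with it and filters tag reads locally at each reader it visits, so only pre-processed data moves on. The class name, EPC filter, and visiting interface are hypothetical illustrations, not the paper's toolset.

```python
from dataclasses import dataclass, field

@dataclass
class MobileAgent:
    """Carries its collection policy and pre-processes tag reads at each reader."""
    policy: callable                     # e.g. keep only EPCs of interest
    collected: list = field(default_factory=list)

    def visit(self, tag_stream):
        # Pre-process in transit: drop irrelevant or duplicate EPCs locally so
        # only filtered data travels on toward the middleware.
        for epc in tag_stream:
            if self.policy(epc) and epc not in self.collected:
                self.collected.append(epc)
        return self                      # the agent then migrates to the next reader

# Hypothetical usage: collect only one product prefix across two readers.
agent = MobileAgent(policy=lambda epc: epc.startswith("urn:epc:id:sgtin:0614141"))
agent.visit(["urn:epc:id:sgtin:0614141.107346.2017", "urn:epc:id:sgtin:9999999.1.1"])
agent.visit(["urn:epc:id:sgtin:0614141.107346.2018"])
print(agent.collected)
```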


Open Platform for Improvement of e-Health Accessibility (의료정보서비스 접근성 향상을 위한 개방형 플랫폼 구축방안)

  • Lee, Hyun-Jik;Kim, Yoon-Ho
    • Journal of Digital Contents Society
    • /
    • v.18 no.7
    • /
    • pp.1341-1346
    • /
    • 2017
  • In this paper, we design an open service platform that combines individually customized services with intelligent information technology, reflecting individuals' complex attributes and requests. First, the data collection phase repeats extraction, transformation, and loading quickly and accurately; the data generated by the extraction-transformation-loading module is stored in a distributed data system. The data analysis phase generates a variety of patterns using analysis algorithms appropriate to the field. The data processing phase uses distributed parallel processing to improve performance. Finally, data provision operates independently of device-specific management platforms and is exposed as an Open API.
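
As an illustration of the data-providing stage, the sketch below exposes analysis results through a minimal Open API endpoint that any client platform can call over HTTP/JSON. The framework (Flask), routes, and stored results are assumptions made for the example, not the paper's implementation.

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Placeholder store of results produced by the collection/analysis phases.
ANALYSIS_RESULTS = {
    "patient-001": {"risk_pattern": "hypertension-watch", "updated": "2017-07-01"},
}

@app.route("/openapi/v1/patterns/<patient_id>", methods=["GET"])
def get_pattern(patient_id):
    """Device-independent endpoint: any client platform pulls results as JSON."""
    result = ANALYSIS_RESULTS.get(patient_id)
    if result is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(result)

if __name__ == "__main__":
    app.run(port=8080)
```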

Issue Analysis on Gas Safety Based on a Distributed Web Crawler Using Amazon Web Services (AWS를 활용한 분산 웹 크롤러 기반 가스 안전 이슈 분석)

  • Kim, Yong-Young;Kim, Yong-Ki;Kim, Dae-Sik;Kim, Mi-Hye
    • Journal of Digital Convergence
    • /
    • v.16 no.12
    • /
    • pp.317-325
    • /
    • 2018
  • With the aim of creating new economic value and strengthening national competitiveness, governments and major private companies around the world continue to take an interest in big data and to make bold investments. In order to collect objective data such as news, securing data integrity and quality is a prerequisite. For researchers or practitioners who wish to make decisions or analyze trends based on objective and massive data such as portal news, the problem with the existing crawler approach is that data collection itself can be blocked. In this study, we implemented a method of collecting web data that addresses these problems by using the cloud service platform provided by Amazon Web Services (AWS). In addition, we collected articles on 'gas safety' and analyzed the related issues. The research confirmed that, to ensure gas safety, strategies should be established and operated systematically around five categories: accident/occurrence, prevention, maintenance/management, government/policy, and target.
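
One common way to realize such a distributed crawler on AWS is to fan crawl tasks out over a queue so that requests originate from many worker instances. The sketch below uses SQS via boto3 under that assumption; the queue URL, portal search URL, and storage step are placeholders, not the authors' configuration.

```python
import boto3
import requests

sqs = boto3.client("sqs", region_name="ap-northeast-2")
QUEUE_URL = "https://sqs.ap-northeast-2.amazonaws.com/123456789012/crawl-tasks"  # placeholder

def enqueue_keyword_pages(keyword: str, pages: int) -> None:
    """Master: push one crawl task per result page of the keyword (e.g. 'gas safety')."""
    for page in range(1, pages + 1):
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=f"{keyword}|{page}")

def save_article_html(keyword: str, page: str, html: str) -> None:
    with open(f"{keyword}_{page}.html", "w", encoding="utf-8") as f:
        f.write(html)

def worker_loop() -> None:
    """Worker on an EC2 instance: pull a task, fetch the page, store the raw HTML."""
    while True:
        resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1,
                                   WaitTimeSeconds=10)
        for msg in resp.get("Messages", []):
            keyword, page = msg["Body"].split("|")
            html = requests.get("https://news.example.com/search",   # placeholder portal URL
                                params={"q": keyword, "page": page}, timeout=10).text
            save_article_html(keyword, page, html)
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```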

Environment Monitoring System Using RF Sensor (RF 센서를 이용한 해양 환경 관리 시스템)

  • Cha, Jin-Man;Park, Yeoun-Sik
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2012.05a
    • /
    • pp.896-898
    • /
    • 2012
  • Recently, many countries have been making efforts to develop ocean resources because the necessity and importance of those resources are increasing. Underwater sensor networks have emerged as a powerful technique for many applications, including monitoring, measurement, surveillance, and control, and are envisioned to enable oceanographic data collection, ocean sampling, environmental and pollution monitoring, offshore exploration, disaster prevention, tsunami and seaquake warning, assisted navigation, distributed tactical surveillance, and mine reconnaissance. The idea of applying sensor networks to underwater environments (i.e., forming underwater sensor networks) has received increasing interest for monitoring aquatic environments for scientific, environmental, commercial, safety, and military reasons. Data obtained by observing the environment are transmitted wirelessly by radio over various frequencies. With the technical development of the transmission medium, several constraints on ocean observation have gradually been overcome, including power, range, corrosion, and watertightness against seawater. Practical issues such as the variety of required data, real-time observation, and data transmission, however, have not been improved as much as in the various telecommunication systems on land. In this paper, we study a wireless management system through the setup of a wireless network available to fisheries along the coast, real-time environmental observation with RF sensors, and data collection by sensing devices in coastal areas.
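
A minimal sketch of the shore-side collection idea, assuming a coastal node that periodically packages its readings and pushes them to a collection server over a plain TCP socket; the node fields, gateway host, and reporting period are hypothetical and stand in for the RF link and sensing hardware discussed above.

```python
import json
import socket
import time

COLLECTOR = ("shore-gateway.local", 9000)   # hypothetical shore-side collection server

def read_sensors() -> dict:
    """Placeholder readings; real values would come from the RF sensing hardware."""
    return {"node": "coast-07", "temp_c": 14.2, "salinity_psu": 33.1, "ts": time.time()}

def transmit_forever(period_s: int = 60) -> None:
    """Periodically push readings to the collector, mimicking real-time observation."""
    while True:
        payload = json.dumps(read_sensors()).encode("utf-8")
        with socket.create_connection(COLLECTOR, timeout=5) as sock:
            sock.sendall(payload)
        time.sleep(period_s)
```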


Design and Implementation of a Search Engine based on Apache Spark (아파치 스파크 기반 검색엔진의 설계 및 구현)

  • Park, Ki-Sung;Choi, Jae-Hyun;Kim, Jong-Bae;Park, Jae-Won
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.21 no.1
    • /
    • pp.17-28
    • /
    • 2017
  • Recently, research on data has been actively conducted as the value of data has grown. The web crawler, a program for data collection, has recently been in the spotlight because it can be exploited in various fields. A web crawler can be defined as a tool that analyzes web pages and collects URLs by traversing web servers in an automated manner. For the treatment of big data, distributed web crawlers based on Hadoop MapReduce are widely used, but they are difficult to use and have performance constraints. Apache Spark, an in-memory computing platform, is an alternative to MapReduce. A search engine, one of the main applications of a web crawler, displays the information retrieved by keyword from the data gathered by the crawler. If search engines implemented a Spark-based web crawler instead of a traditional MapReduce-based one, data collection would be more rapid.
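
The closing point, replacing a MapReduce-based crawler with a Spark-based one, can be sketched as a single crawl round in PySpark: the URL frontier is parallelized across executors, each fetches its pages and extracts links, and the results are collected for the next round. The seed list and the regex-based link extraction are simplifications for illustration, not the system described in the paper.

```python
import re
import requests
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark-crawler-sketch").getOrCreate()
sc = spark.sparkContext

def fetch_links(url: str):
    """Fetch one page and return the absolute URLs it links to (simplified)."""
    try:
        html = requests.get(url, timeout=5).text
    except requests.RequestException:
        return []
    return re.findall(r'href="(https?://[^"]+)"', html)

seeds = ["https://example.com"]                       # placeholder seed list
frontier = sc.parallelize(seeds)

# One crawl round: fetch every frontier URL in parallel, collect newly found links.
next_frontier = frontier.flatMap(fetch_links).distinct().collect()
print(len(next_frontier), "links discovered")
```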