• Title/Summary/Keyword: Data Portal


Correlation between Internet Search Query Data and the Health Insurance Review & Assessment Service Data for Seasonality of Plantar Fasciitis (족저 근막염의 계절성에 대한 인터넷 검색어 데이터와 건강보험심사평가원 자료의 연관성)

  • Hwang, Seok Min;Lee, Geum Ho;Oh, Seung Yeol
    • Journal of Korean Foot and Ankle Society / v.25 no.3 / pp.126-132 / 2021
  • Purpose: This study examined whether there are seasonal variations in the number of plantar fasciitis cases in the database of the Korean Health Insurance Review & Assessment Service and in internet search volume data related to plantar fasciitis, and whether these variations are correlated. Materials and Methods: The number of plantar fasciitis cases per month from January 2016 to December 2019 was acquired from the Korean Health Insurance Review & Assessment Service. The monthly relative internet search volumes for the keywords "plantar fasciitis" and "heel pain" were collected for the same period from DataLab, an internet search query trend service provided by the Korean portal website Naver. Cosinor analysis was performed to confirm the seasonality of the monthly number of cases and the relative search volumes, and Pearson and Spearman correlation analyses were conducted to assess the correlation between them. Results: The number of plantar fasciitis cases and the relative search volumes for the keywords "plantar fasciitis" and "heel pain" all showed significant seasonality (p<0.001), highest in summer and lowest in winter. The number of plantar fasciitis cases was significantly correlated with the relative search volumes of the keywords "plantar fasciitis" (r=0.632; p<0.001) and "heel pain" (r=0.791; p<0.001), respectively. Conclusion: Both the number of plantar fasciitis cases and the internet search data for related keywords showed seasonality, peaking in summer. The number of cases showed a significant correlation with the internet search data for the seasonality of plantar fasciitis. Internet big data could be a complementary resource for researching and monitoring plantar fasciitis.
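The cosinor-plus-correlation workflow described above can be sketched as follows. This is a minimal illustration on synthetic monthly series; the real HIRA case counts and Naver DataLab volumes are not reproduced, and the variable names are invented for the example.

```python
import numpy as np

# Synthetic stand-ins for 48 monthly observations (Jan 2016 - Dec 2019),
# built with a mid-year (summer) peak to mimic the reported pattern.
rng = np.random.default_rng(0)
t = np.arange(48)
season = np.cos(2 * np.pi * (t - 6) / 12)          # peaks at t = 6 (July)
cases = 1000 + 200 * season + rng.normal(0, 20, 48)
search = 50 + 10 * season + rng.normal(0, 2, 48)

def cosinor_fit(t, y, period=12):
    """Least-squares cosinor: y ~ mesor + A*cos(wt) + B*sin(wt)."""
    w = 2 * np.pi / period
    X = np.column_stack([np.ones_like(t, dtype=float),
                         np.cos(w * t), np.sin(w * t)])
    (mesor, a, b), *_ = np.linalg.lstsq(X, y, rcond=None)
    amplitude = np.hypot(a, b)        # size of the seasonal swing
    acrophase = np.arctan2(-b, a)     # timing of the peak, in radians
    return mesor, amplitude, acrophase

mesor, amplitude, acrophase = cosinor_fit(t, cases)
r = np.corrcoef(cases, search)[0, 1]  # Pearson correlation coefficient
```

A significant fitted amplitude indicates seasonality, and the correlation coefficient plays the role of the r values reported in the abstract.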

Calculated Damage of Italian Ryegrass in Abnormal Climate Based World Meteorological Organization Approach Using Machine Learning

  • Jae Seong Choi;Ji Yung Kim;Moonju Kim;Kyung Il Sung;Byong Wan Kim
    • Journal of The Korean Society of Grassland and Forage Science / v.43 no.3 / pp.190-198 / 2023
  • This study was conducted to calculate the damage to Italian ryegrass (IRG) caused by abnormal climate using machine learning and to present the damage on a map. A total of 1,384 IRG data points were collected. The climate data were collected from the Korea Meteorological Administration's meteorological data open portal. A machine learning model called xDeepFM was used to detect IRG damage. The damage was calculated by machine learning using climate data from the Automated Synoptic Observing System (ASOS; 95 sites). Damage was defined as the difference between the normal dry matter yield (DMYnormal) and the abnormal dry matter yield (DMYabnormal). The normal climate was set as the 40 years of climate data corresponding to the years of the IRG data (1986~2020). The level of abnormal climate was set as a multiple of the standard deviation, applying the World Meteorological Organization (WMO) standard. DMYnormal ranged from 5,678 to 15,188 kg/ha. The damage to IRG differed according to region and level of abnormal climate, ranging from -1,380 to 1,176 kg/ha for abnormal temperature, -3 to 2,465 kg/ha for abnormal precipitation, and -830 to 962 kg/ha for abnormal wind speed. The maximum damage was 1,176 kg/ha when the abnormal temperature was at the -2 level (+1.04℃), 2,465 kg/ha when the abnormal precipitation was at any level, and 962 kg/ha when the abnormal wind speed was at the -2 level (+1.60 ㎧). The damage calculated through the WMO method was presented as a map using QGIS. Some areas were left blank because no climate data were available. To calculate the damage for these blank areas, it would be possible to use the automatic weather system (AWS), which provides data from more sites than the ASOS.
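The two quantitative ingredients above, abnormality levels as multiples of the normal-period standard deviation and damage as a yield gap, can be sketched as follows. All numbers here are invented for illustration; the real normal-period statistics come from the 1986~2020 ASOS records.

```python
import numpy as np

# Hypothetical 40-year normal-period mean temperatures for one site (deg C).
rng = np.random.default_rng(1)
normals = rng.normal(12.0, 0.8, 40)
mean, sd = normals.mean(), normals.std(ddof=1)

def abnormal_threshold(level):
    """Climate value at a given abnormality level, expressed as a signed
    multiple of the normal-period standard deviation (WMO-style)."""
    return mean + level * sd

def damage(dmy_normal, dmy_abnormal):
    """Damage is the yield gap between normal and abnormal conditions (kg/ha)."""
    return dmy_normal - dmy_abnormal
```

Under this convention a "+2 level" climate sits two standard deviations above the normal-period mean, and a positive damage value means the abnormal climate reduced dry matter yield.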

The US National Ecological Observatory Network and the Global Biodiversity Framework: national research infrastructure with a global reach

  • Katherine M. Thibault;Christine M. Laney;Kelsey M. Yule;Nico M. Franz;Paula M. Mabee
    • Journal of Ecology and Environment / v.47 no.4 / pp.219-227 / 2023
  • The US National Science Foundation's National Ecological Observatory Network (NEON) is a continental-scale program intended to provide open data, samples, and infrastructure to understand changing ecosystems over a period of 30 years. NEON collects co-located measurements of drivers of environmental change and biological responses, using standardized methods at 81 field sites to systematically sample variability and trends and enable inferences at regional to continental scales. Alongside key atmospheric and environmental variables, NEON measures the biodiversity of many taxa, including microbes, plants, and animals, and collects samples from these organisms for long-term archiving and research use. Here we review the composition and use of NEON resources to date, as a whole and specifically for biodiversity, as an exemplar of the potential of national research infrastructure to contribute to globally relevant outcomes. Since initiating full operations in 2019, NEON has produced, on average, 1.4 M records and over 32 TB of data per year across more than 180 data products, with 85 products that include taxonomic or other organismal information relevant to biodiversity science. NEON has also collected and curated more than 503,000 samples and specimens spanning all taxonomic domains of life, with up to 100,000 more to be added annually. Various metrics of use, including web portal visitation, data download and sample use requests, and scientific publications, reveal substantial interest from the global community in NEON. More than 47,000 unique IP addresses from around the world visit NEON's web portals each month, requesting on average 1.8 TB of data, and over 200 researchers have engaged in sample use requests from the NEON Biorepository. Through its many global partnerships, particularly with the Global Biodiversity Information Facility, NEON resources have been used in more than 900 scientific publications to date, many using biodiversity data and samples. These outcomes demonstrate that the data and samples provided by NEON, situated in a broader network of national research infrastructures, are critical to scientists, conservation practitioners, and policy makers. They enable effective approaches to meeting global targets, such as those captured in the Kunming-Montreal Global Biodiversity Framework.

Analysis and utilization of emergency big data (구급 빅데이터의 분석과 활용 방안에 관한 연구)

  • Lee, Seong-Yeon;Kwon, Yu-Jin;Lim, Dong-Oh;Kim, Min-Gyu;Park, Hee-Jin;Kwon, Hay-Rhan;Ju, Young-Cheol
    • The Korean Journal of Emergency Medical Services / v.20 no.1 / pp.41-55 / 2016
  • Emergency statistics for cities and provinces are currently derived from simple comparisons of numerical data, but there is a limit to the ability to analyze and compare deviations relevant to a specific city or province. This study aims to derive various correlations through statistical analysis of emergency and rescue data for Gwangju Metropolitan City and to develop an analytical model that can be applied nationwide. First, with the new statistical model, further detailed analysis beyond simple evaluation of rescue data is possible through links to other institutions and analyses using keywords from internet portal sites and social networks. Second, a system that can analyze data that are not shared is required; through this system, a large amount of data can be analyzed automatically in real time. Third, the results should flow back into various policies. A real-time monitoring and management system should be created for abnormal disease patterns. In addition, the results should be available to tailor services for individuals, communities, or specific organizations.

Research of Quality Improvement by Factors Analysis Data Quality Problem: Focus on National R&D Information Linking Structure (품질문제 요인분석을 통한 데이터 품질 개선방안 연구: 국가R&D정보연계 방식 중심)

  • Shon, Kang-Ryul;Lim, Jong-Tae
    • The Journal of the Korea Contents Association / v.9 no.1 / pp.1-14 / 2009
  • Currently, there are nearly 100 domestic government R&D programs, each managed individually by one of 15 specialized research management organizations according to the characteristics of the program. For this reason, redundant investment in national R&D occurs, and concerns about the efficiency of R&D investment are continuously raised owing to the lack of systematic management of R&D projects and results. To resolve these issues, the Ministry of Education, Science and Technology established the National Science & Technology Information Service (NTIS). NTIS is the national R&D portal system that can support the efficiency of research and development, from planning through result utilization. In this paper, we consider integrated DB construction and information linking of R&D participant/project/result information in the NTIS system for data quality improvement. We then analyze the causes of data quality problems and propose an improvement plan for raising the data quality of the NTIS system.

Comparison and Analysis of Dieting Practices Using Big Data from 2010 and 2015 (빅데이터를 통한 2010년과 2015년의 다이어트 실태 비교 및 분석)

  • Jung, Eun-Jin;Chang, Un-Jae
    • Korean Journal of Community Nutrition / v.23 no.2 / pp.128-136 / 2018
  • Objectives: The purpose of this study was to compare and analyze dieting practices and tendencies in 2010 and 2015 using big data. Methods: Keywords related to dieting were collected from the portal site Naver, from January 1 to December 31 of 2010 for the 2010 data and from January 1 to December 31 of 2015 for the 2015 data. Collected data were analyzed by simple frequency analysis, N-gram analysis, keyword network analysis, and seasonality analysis. Results: Exercise had the highest frequency in the simple frequency analysis in both years. However, weight reduction in 2010 and diet menu in 2015 appeared most frequently in the N-gram analysis. In addition, keyword network analysis yielded three groups in 2010 (diet group, exercise group, and commercial weight control group) and four groups in 2015 (diet group, exercise group, commercial program for weight control group, and commercial food for weight control group). Analysis of seasonality showed that subjects' interest in dieting increased steadily from February to July, and subjects were most interested in dieting in July in both years. Conclusions: The amount of data in 2015 increased compared with 2010, and the diet groupings could be further subdivided. In addition, a similar pattern appeared over a one-year cycle in both 2010 and 2015. Therefore, dieting practices are reflected in society and change according to trends.
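The simple frequency and N-gram analyses described above can be sketched as follows. The tiny corpus here is invented to mirror the reported pattern (exercise as the top single keyword, "diet menu" as a top 2-gram); the real posts came from Naver.

```python
from collections import Counter

# Hypothetical tokenized diet-related posts (stand-ins for the Naver corpus).
posts = [
    ["diet", "menu", "exercise"],
    ["weight", "reduction", "exercise"],
    ["exercise", "diet", "menu"],
]

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

# Simple frequency analysis: count individual keywords.
unigrams = Counter(tok for post in posts for tok in post)
# N-gram analysis (n=2): count adjacent keyword pairs within each post.
bigrams = Counter(ng for post in posts for ng in ngrams(post, 2))
```

Ranking `unigrams` and `bigrams` by count reproduces the two views the study compares: which single keyword dominates versus which keyword combinations dominate.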

Design and Implementation of Smart City Data Marketplace based on oneM2M Standard IoT Platform (oneM2M 표준 IoT 플랫폼 기반 스마트시티 데이터 마켓플레이스 설계 및 구현)

  • Jeong, SeungMyeong;Kim, Seong-yun;Lee, In-Song
    • Journal of Internet Computing and Services / v.20 no.6 / pp.157-166 / 2019
  • oneM2M has been adopted in national and global smart city platforms, leveraging its benefits: the oneM2M platform assures interoperability for devices and services through standard APIs. Existing access control mechanisms in the standard should be extended to make smart city data easy to distribute. Compared to the as-is standard, this paper proposes a new access control method that requires minimal human intervention during data distribution between data sellers and buyers. The proposal has been implemented as new data marketplace APIs for the oneM2M platform and used for interworking with a data marketplace portal. It has also been demonstrated with a smart city PoC service.
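The core idea, granting a buyer access automatically at purchase time instead of having the seller edit access control settings per transaction, can be sketched generically. The names below are illustrative only and are not oneM2M standard resources or API calls.

```python
# Hypothetical in-memory access control list for marketplace resources.
acl = {"cityData/airQuality": {"owner": "sellerA", "readers": set()}}

def purchase(resource, buyer):
    """On purchase, the marketplace itself appends the buyer to the
    resource's reader set; no manual seller intervention is needed."""
    acl[resource]["readers"].add(buyer)

def can_read(resource, subject):
    """Owner or any granted reader may retrieve the data."""
    entry = acl[resource]
    return subject == entry["owner"] or subject in entry["readers"]

purchase("cityData/airQuality", "buyer1")
```

In an actual oneM2M deployment the reader set would correspond to access control policy resources managed by the platform, but the automation principle is the same.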

Issue Analysis on Gas Safety Based on a Distributed Web Crawler Using Amazon Web Services (AWS를 활용한 분산 웹 크롤러 기반 가스 안전 이슈 분석)

  • Kim, Yong-Young;Kim, Yong-Ki;Kim, Dae-Sik;Kim, Mi-Hye
    • Journal of Digital Convergence / v.16 no.12 / pp.317-325 / 2018
  • With the aim of creating new economic value and strengthening national competitiveness, governments and major private companies around the world maintain their interest in big data and are making bold investments. To collect objective data, such as news, securing data integrity and quality should be a prerequisite. For researchers or practitioners who wish to make decisions or trend analyses based on objective and massive data, such as portal news, the problem with existing crawler methods is that data collection itself can be blocked. In this study, we implemented a method of collecting web data that addresses these problems using the cloud service platform provided by Amazon Web Services (AWS). In addition, we collected 'gas safety' articles and analyzed issues related to gas safety. The research confirmed that, to ensure gas safety, strategies should be established and systematically operated based on five categories: accident/occurrence, prevention, maintenance/management, government/policy, and target.
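The crawling pattern described above can be sketched in miniature. The paper distributes crawlers across AWS instances so that no single IP is blocked; in this sketch a local thread pool stands in for those distributed workers, and the URLs are placeholders rather than real portal endpoints.

```python
import concurrent.futures
import urllib.request

# Placeholder article URLs; a real run would enumerate portal news pages.
URLS = [
    "https://example.com/news?page=1",
    "https://example.com/news?page=2",
]

def fetch(url, timeout=5):
    """Fetch one page. In the distributed setup, each worker node runs
    this from its own IP address, spreading out the request load."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return url, resp.read()

def crawl(urls, max_workers=4):
    """Fan the URL list out over a pool of workers and collect results."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as pool:
        return dict(pool.map(fetch, urls))
```

Replacing the thread pool with independently addressed cloud instances is what turns this single-machine sketch into the distributed design the study implements.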

Prediction Model of Real Estate Transaction Price with the LSTM Model based on AI and Bigdata

  • Lee, Jeong-hyun;Kim, Hoo-bin;Shim, Gyo-eon
    • International Journal of Advanced Culture Technology / v.10 no.1 / pp.274-283 / 2022
  • Korea is facing a number of difficulties arising from rising housing prices. As housing takes the lion's share of personal assets, many difficulties are expected to arise from fluctuating housing prices. The purpose of this study is to create a housing price prediction model to prevent such risks and induce reasonable real estate purchases. This study made many attempts to understand real estate instability and to create an appropriate housing price prediction model. Housing prices were predicted and validated using the LSTM technique, a type of artificial intelligence deep learning technology. LSTM is a network in which the cell state and hidden state are recursively calculated, in a structure that adds a cell state, which plays a conveyor-belt role, to the hidden state of the existing RNN. The real sale prices of apartments in autonomous districts from January 2006 to December 2019 were collected through the Ministry of Land, Infrastructure, and Transport's real sale price open system, and basic apartment and commercial district information was collected through the Public Data Portal and Seoul Metropolitan City data. The collected real sale price data were scaled based on the monthly average sale price, and a total of 168 data points were organized by preprocessing the respective data based on address. To predict prices, the LSTM implementation was conducted by setting the training period as 29 months (April 2015 to August 2017), the validation period as 13 months (September 2017 to September 2018), and the test period as 13 months (December 2018 to December 2019) according to the time series data set. This study obtained 76 percent prediction similarity, confirming that a model at that level can be built from the collected time series data. This validated that predicting the rate of return through the LSTM method can gain reliability.
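The time-series preparation described above, sliding-window inputs and a strictly chronological train/validation/test split, can be sketched as follows. The 168 monthly values are stand-ins (the real scaled sale prices are not reproduced), the 12-month lookback and the 70/15/15 split ratios are illustrative assumptions, and the LSTM network itself is omitted.

```python
import numpy as np

# Stand-in for 168 scaled monthly average sale prices.
prices = np.linspace(100, 267, 168)

def make_windows(series, lookback=12):
    """Each sample uses `lookback` months of history to predict the
    next month's value, the usual LSTM input shape for this task."""
    X = np.stack([series[i:i + lookback]
                  for i in range(len(series) - lookback)])
    y = series[lookback:]
    return X, y

X, y = make_windows(prices)

# Chronological split: no shuffling across time, so the test period
# always lies after the training and validation periods.
n = len(X)
train_end, val_end = int(n * 0.7), int(n * 0.85)
X_train, X_val, X_test = X[:train_end], X[train_end:val_end], X[val_end:]
```

Keeping the split chronological is the point: shuffling would leak future prices into training, inflating the apparent prediction similarity.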

The Characteristic of Web Map Service Using RIA Technologies (RIA기술을 적용한 웹 지도 서비스의 특징 연구)

  • Kim, Moon-Gie;Koh, June-Hwan
    • Spatial Information Research / v.20 no.2 / pp.35-44 / 2012
  • Recently, web map services have been actively developed by both private companies and public offices, on platforms ranging from desktop to smartphone, and the technologies used continue to develop. Web map systems also provide various open APIs for geospatial services and data mashups. In this paper, RIA (Rich Internet Application) technology, which has recently become popular in web map services, is applied to examine the functions that map services built with different languages mostly use. Tests and analyses were conducted based on users' perception of speed in different web browsers. The result is that JavaScript, Silverlight, and Flex each show different characteristics depending on the function. The actual test was personally carried out on the map service of the Seoul GIS portal system. The overall conclusion is that Silverlight performed better than the other RIA techniques under the test environment.