• Title/Summary/Keyword: 시간오차 (time error)

Search Results: 2,931 (processing time: 0.031 seconds)

Monitoring of a Time-series of Land Subsidence in Mexico City Using Space-based Synthetic Aperture Radar Observations (인공위성 영상레이더를 이용한 멕시코시티 시계열 지반침하 관측)

  • Ju, Jeongheon;Hong, Sang-Hoon
    • Korean Journal of Remote Sensing / v.37 no.6_1 / pp.1657-1667 / 2021
  • Anthropogenic activities and natural processes both cause land subsidence, the sudden sinking or gradual settlement of the earth's solid surface. Mexico City, the capital of Mexico, is one of the areas most severely affected by land subsidence, which results from excessive groundwater extraction: groundwater is the city's primary water resource, accounting for almost 70% of total water usage. Traditional terrestrial observations such as the Global Navigation Satellite System (GNSS) or leveling surveys have been preferred for measuring land subsidence accurately. Although GNSS observations provide highly accurate surface-displacement information at very high temporal resolution, they are often limited by sparse spatial coverage and by their high cost in time and money. In contrast, space-based synthetic aperture radar (SAR) interferometry has been widely used as a powerful tool to monitor surface displacement with high spatial resolution and mm- to cm-scale accuracy, regardless of day or night and weather conditions. In this paper, advanced interferometric approaches are applied to derive a time series of land subsidence in Mexico City from twenty ALOS PALSAR L-band observations spanning four years, acquired from February 11, 2007 to February 22, 2011. We utilized the persistent scatterer interferometry (PSI) and small baseline subset (SBAS) techniques to suppress atmospheric artifacts and topographic errors. The results show that the maximum subsidence rates from the PSI and SBAS methods were -29.5 cm/year and -27.0 cm/year, respectively. In addition, we discuss the differing subsidence rates across three districts of the study area, distinguished by their geotechnical characteristics. The most significant subsidence occurred in the lacustrine sediments, which are far more compressible than the harder bedrock.
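A minimal sketch of the basic InSAR relation underlying the analysis above: converting unwrapped interferometric phase to line-of-sight (LOS) displacement using the L-band wavelength, then fitting a linear subsidence rate. This is only an illustration of the core formula, not the paper's PSI/SBAS processing chain; the time series below is synthetic and hypothetical.

```python
import numpy as np

WAVELENGTH_M = 0.236  # ALOS PALSAR L-band wavelength (~23.6 cm)

def phase_to_los_displacement(unwrapped_phase_rad):
    """LOS displacement d = -lambda/(4*pi) * phi (one common sign
    convention: increasing phase = motion away from the sensor)."""
    return -WAVELENGTH_M / (4.0 * np.pi) * unwrapped_phase_rad

# Hypothetical time series for one pixel: acquisition times in years and
# unwrapped phase (radians) relative to the first acquisition.
t_years = np.array([0.0, 0.3, 0.7, 1.2, 1.8, 2.5, 3.1, 4.0])
phase = 2.0 * np.pi * 2.2 * t_years  # synthetic: roughly -26 cm/yr in LOS

d_los_m = phase_to_los_displacement(phase)
rate_m_per_yr, offset = np.polyfit(t_years, d_los_m, 1)
print(f"LOS rate: {rate_m_per_yr * 100:.1f} cm/yr")
```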

Estimation of Surface fCO2 in the Southwest East Sea using Machine Learning Techniques (기계학습법을 이용한 동해 남서부해역의 표층 이산화탄소분압(fCO2) 추정)

  • HAHM, DOSHIK;PARK, SOYEONA;CHOI, SANG-HWA;KANG, DONG-JIN;RHO, TAEKEUN;LEE, TONGSUP
    • The Sea: Journal of the Korean Society of Oceanography / v.24 no.3 / pp.375-388 / 2019
  • Accurate evaluation of the sea-to-air $CO_2$ flux and its variability is crucial to understanding the global carbon cycle and predicting atmospheric $CO_2$ concentration. $fCO_2$ observations in the East Sea are sparse in space and time. In this study, we derived a high-resolution time series of surface $fCO_2$ values in the southwest East Sea by feeding sea surface temperature (SST), salinity (SSS), chlorophyll-a (CHL), and mixed layer depth (MLD) values, from either satellite observations or numerical model outputs, to three machine learning models. The root mean square error of the best performing model, a Random Forest (RF) model, was $7.1\ \mu\mathrm{atm}$. The important parameters in predicting $fCO_2$ in the RF model were SST and SSS along with time information; CHL and MLD were much less important. The net $CO_2$ flux in the southwest East Sea, calculated from the $fCO_2$ predicted by the RF model, was $-0.76 \pm 1.15\ \mathrm{mol\ m^{-2}\ yr^{-1}}$, close to the lower bound of previous estimates, which range from $-0.66$ to $-2.47\ \mathrm{mol\ m^{-2}\ yr^{-1}}$. The time series of $fCO_2$ predicted by the RF model showed significant variation even over intervals as short as a week. Accurate evaluation of the $CO_2$ flux in the Ulleung Basin therefore requires high-resolution in situ observations in spring, when $fCO_2$ changes rapidly.
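A minimal sketch of the kind of Random Forest regression described above, estimating surface $fCO_2$ from SST, SSS, CHL, and MLD plus time information. The matchup file and column names are hypothetical, and the paper's exact feature engineering and tuning are not reproduced.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

df = pd.read_csv("fco2_matchups.csv")                 # hypothetical matchup file
df["doy"] = pd.to_datetime(df["date"]).dt.dayofyear   # time information
X = df[["sst", "sss", "chl", "mld", "doy"]]
y = df["fco2"]                                        # observed fCO2 (uatm)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0)
rf.fit(X_tr, y_tr)

rmse = np.sqrt(mean_squared_error(y_te, rf.predict(X_te)))
print(f"RMSE: {rmse:.1f} uatm")
# Feature importances indicate which predictors dominate (SST/SSS in the paper).
print(dict(zip(X.columns, rf.feature_importances_)))
```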

Very short-term rainfall prediction based on radar image learning using deep neural network (심층신경망을 이용한 레이더 영상 학습 기반 초단시간 강우예측)

  • Yoon, Seongsim;Park, Heeseong;Shin, Hongjoon
    • Journal of Korea Water Resources Association / v.53 no.12 / pp.1159-1172 / 2020
  • This study applied deep convolutional neural networks based on U-Net and SegNet, trained on a long-period weather radar dataset, to very short-term rainfall prediction, and compared the results against a translation model. For training and validation of the deep neural networks, Mt. Gwanak and Mt. Gwangdeoksan radar data were collected from 2010 to 2016 and converted to gray-scale image files in HDF5 format with 1 km spatial resolution. The deep neural network model was trained to predict precipitation 10 minutes ahead from four consecutive radar images, and a recursive method of repeated forecasting, feeding each prediction back into the pretrained model, was applied to reach a lead time of 60 minutes. To evaluate the prediction model, 24 rain cases in 2017 were forecast up to 60 minutes in advance. Evaluating performance with the mean absolute error (MAE) and critical success index (CSI) at thresholds of 0.1, 1, and 5 mm/hr, the deep neural network model performed better in terms of MAE at the 0.1 and 1 mm/hr thresholds, and better than the translation model in terms of CSI for lead times up to 50 minutes. In particular, although the deep neural network model generally outperformed the translation model for weak rainfall of 5 mm/hr or less, the evaluation at the 5 mm/hr threshold showed its limitations in reproducing distinct high-intensity precipitation features. As lead time increases, the predicted fields become spatially smoother, reducing the accuracy of the rainfall prediction. The translation model turned out to be superior in predicting exceedance of higher intensity thresholds (> 5 mm/hr) because it preserves distinct precipitation characteristics, although it tends to misplace the rainfall position. This study is expected to be helpful for improving radar-based rainfall prediction models using deep neural networks. In addition, the massive weather radar dataset established in this study will be provided through open repositories for use in subsequent studies.
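A minimal sketch of the recursive forecasting scheme described above: a model trained to map four consecutive radar frames to the frame 10 minutes ahead is applied repeatedly, feeding each prediction back in, to reach a 60-minute lead time. Here `model` stands in for the trained U-Net/SegNet, and the channels-first array shapes are assumptions.

```python
import numpy as np

def recursive_forecast(model, last_four_frames, n_steps=6):
    """last_four_frames: (4, H, W) array of the most recent radar images.
    Returns a list of n_steps predicted frames at 10-min intervals."""
    frames = list(last_four_frames)
    predictions = []
    for _ in range(n_steps):                   # 6 x 10 min = 60 min lead time
        x = np.stack(frames[-4:])[np.newaxis]  # (1, 4, H, W) model input
        pred = model.predict(x)[0]             # predicted (H, W) frame
        predictions.append(pred)
        frames.append(pred)                    # feed the prediction back in
    return predictions
```

The spatial smoothing noted in the abstract is an expected side effect of this loop: each pass re-predicts from already-smoothed inputs, so blurring compounds with lead time.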

Non-astronomical Tides and Monthly Mean Sea Level Variations due to Differing Hydrographic Conditions and Atmospheric Pressure along the Korean Coast from 1999 to 2017 (한국 연안에서 1999년부터 2017년까지 해수물성과 대기압 변화에 따른 계절 비천문조와 월평균 해수면 변화)

  • BYUN, DO-SEONG;CHOI, BYOUNG-JU;KIM, HYOWON
    • The Sea: Journal of the Korean Society of Oceanography / v.26 no.1 / pp.11-36 / 2021
  • The solar annual (Sa) and semiannual (Ssa) tides account for much of the non-uniform annual and seasonal variability observed in sea levels. These non-equilibrium tides depend on atmospheric variations, forced by changes in the Sun's distance and declination, as well as on hydrographic conditions. Here we employ tidal harmonic analysis to calculate Sa and Ssa harmonic constants for 21 Korean coastal tidal stations (TS) operated by the Korea Hydrographic and Oceanographic Agency. We used 19-year-long (1999 to 2017) sea level records at 1 hr intervals from each site, processed with two conventional harmonic analysis (HA) programs (Task2K and UTide). The stability of the Sa harmonic constants was estimated with respect to the starting date and record length of the data, and we examined the spatial distribution of the calculated Sa and Ssa harmonic constants. HA was performed on Incheon TS (ITS) records using 369-day subsets; the first start date was January 1, 1999, with each subsequent subset starting 24 hours later, up to a final start date of December 27, 2017. Variations in the Sa constants produced by the two HA packages had similar magnitudes and start-date sensitivity. Results from the two packages showed a large difference in phase lag (about 78°) but a relatively small difference in amplitude (<1 cm). The phase lag difference arises largely because Task2K excludes the perihelion astronomical variable. Sensitivity of the ITS Sa constants to record length (1, 2, 3, 5, 9, and 19 years) was also tested to determine the data length needed for stable Sa results: 5- to 9-year sea level records can estimate Sa harmonic constants with relatively small error, while the best results are produced using the 19-year-long records. As noted earlier, Sa amplitudes vary with regional hydrographic and atmospheric conditions. Sa amplitudes at the twenty-one TS ranged from 15.0 to 18.6 cm along the west coast, 10.7 to 17.5 cm along the south coast including Jejudo, and 10.5 to 13.0 cm along the east coast including Ulleungdo. Except at Ulleungdo, the Ssa constituent was found to contribute to the asymmetric seasonal sea level variation, delaying the highest sea levels and hastening the lowest. Comparisons between monthly mean, air-pressure-adjusted, and steric sea level variations revealed that year-to-year and asymmetric seasonal variations in sea levels were largely produced by steric sea level variation and the inverted barometer effect.
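A minimal sketch of estimating Sa (annual) and Ssa (semiannual) harmonic constants from a sea level record by ordinary least squares. The dedicated packages used in the paper (Task2K, UTide) additionally apply astronomical arguments and nodal corrections, which is precisely where the phase-lag discrepancy discussed above comes from; those refinements are omitted here.

```python
import numpy as np

def fit_sa_ssa(t_days, sea_level):
    """t_days: times in days (e.g., np.arange(N) / 24.0 for hourly data);
    sea_level: record of the same length. Returns {name: (amplitude,
    phase_deg)} for Sa and Ssa, with amplitude in the record's units."""
    periods = {"Sa": 365.25, "Ssa": 182.625}
    cols = [np.ones_like(t_days)]                  # mean sea level term
    for p in periods.values():
        w = 2.0 * np.pi / p
        cols += [np.cos(w * t_days), np.sin(w * t_days)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, sea_level, rcond=None)
    out = {}
    for i, name in enumerate(periods):
        c, s = coef[1 + 2 * i], coef[2 + 2 * i]    # h = c*cos(wt) + s*sin(wt)
        out[name] = (np.hypot(c, s), np.degrees(np.arctan2(s, c)) % 360.0)
    return out
```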

An Outlier Detection Using Autoencoder for Ocean Observation Data (해양 이상 자료 탐지를 위한 오토인코더 활용 기법 최적화 연구)

  • Kim, Hyeon-Jae;Kim, Dong-Hoon;Lim, Chaewook;Shin, Yongtak;Lee, Sang-Chul;Choi, Youngjin;Woo, Seung-Buhm
    • Journal of Korean Society of Coastal and Ocean Engineers / v.33 no.6 / pp.265-274 / 2021
  • Outlier detection research on ocean data has traditionally used statistical and distance-based machine learning algorithms. Recently, AI-based methods have received much attention, mainly so-called supervised learning methods that require classification information for the data. Supervised learning is time-consuming and costly because classification information (labels) must be assigned manually for all training data. In this study, an autoencoder based on unsupervised learning was applied to outlier detection to overcome this problem. Two experiments were designed: univariate learning, using only the sea surface temperature (SST) data among the Deokjeok Island observations, and multivariate learning, using SST, air temperature, wind direction, wind speed, air pressure, and humidity. The data cover 25 years, from 1996 to 2020, and pre-processing tailored to the characteristics of ocean data was applied. Outliers in real SST data were then detected with the trained univariate and multivariate autoencoders. To compare model performance, various outlier detection methods were applied to synthetic data with artificially inserted errors. Quantitative evaluation showed multivariate and univariate accuracies of about 96% and 91%, respectively, indicating that the multivariate autoencoder had better outlier detection performance. Outlier detection using an unsupervised autoencoder is expected to find wide use, as it can reduce subjective classification errors as well as the cost and time required for data labeling.
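A minimal sketch of unsupervised outlier detection with an autoencoder, in the spirit of the multivariate experiment above: reconstruct the input and flag samples whose reconstruction error is unusually large. The layer sizes, the random stand-in data, and the 99th-percentile threshold are illustrative choices, not the paper's configuration.

```python
import numpy as np
from tensorflow import keras

n_features = 6  # e.g., SST, air temperature, wind dir/speed, pressure, humidity

model = keras.Sequential([
    keras.Input(shape=(n_features,)),
    keras.layers.Dense(4, activation="relu"),   # encoder
    keras.layers.Dense(2, activation="relu"),   # bottleneck
    keras.layers.Dense(4, activation="relu"),   # decoder
    keras.layers.Dense(n_features),             # reconstruction
])
model.compile(optimizer="adam", loss="mse")

X_train = np.random.rand(1000, n_features)      # stands in for normalized obs
model.fit(X_train, X_train, epochs=20, batch_size=32, verbose=0)

errors = np.mean((X_train - model.predict(X_train)) ** 2, axis=1)
threshold = np.percentile(errors, 99)           # flag the largest errors
outliers = errors > threshold
```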

Characteristics of Spectra of Daily Satellite Sea Surface Temperature Composites in the Seas around the Korean Peninsula (한반도 주변해역 일별 위성 해수면온도 합성장 스펙트럼 특성)

  • Woo, Hye-Jin;Park, Kyung-Ae;Lee, Joon-Soo
    • Journal of the Korean Earth Science Society / v.42 no.6 / pp.632-645 / 2021
  • Satellite sea surface temperature (SST) composites provide important data for numerical forecasting models and for research on global warming and climate change. In this study, six representative SST composite databases were collected from 2007 to 2018 and the characteristics of the spatial structures of SST were analyzed in the seas around the Korean Peninsula. The SST composite data were compared with time series of in-situ measurements from ocean meteorological buoys of the Korea Meteorological Administration by analyzing the maximum error and its occurrence time at each buoy station. Large differences between the SST data and in-situ measurements were detected at the western coastal stations, in particular Deokjeokdo and Chilbaldo, with a dominant annual or semi-annual cycle. At the Pohang buoy, a large SST difference was observed in the summer of 2013, when cold water appeared in the surface layer due to strong upwelling. Spectrum analysis of the time series showed that the daily satellite SSTs had spectral energy similar to that of the in-situ measurements at periods longer than approximately one month. On the other hand, the difference in spectral energy between the satellite SSTs and in-situ temperatures tended to grow as the temporal frequency increased. This suggests that satellite SST composite data may not adequately express the temporal variability of SST in near-coastal areas. Oceanic fronts derived from the satellite SST images revealed differences among the databases in the spatial structure and magnitude of the fronts. The spatial scale resolved by each SST composite field was investigated through spatial spectral analysis; the high-resolution SST composites expressed the spatial structures of mesoscale ocean phenomena better than the low-resolution ones. Therefore, to express actual mesoscale ocean phenomena in more detail, more advanced techniques for producing SST composites need to be developed.
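A minimal sketch of the kind of spectral comparison described above: estimating the power spectral density of a daily satellite composite and a collocated buoy record, then comparing their energy at sub-monthly periods, where the composites tended to diverge from in-situ measurements. The series below are random stand-ins for the real data.

```python
import numpy as np
from scipy.signal import welch

fs = 1.0                                # 1 sample per day
buoy_sst = np.random.randn(4000)        # stands in for the buoy time series
satellite_sst = np.random.randn(4000)   # stands in for the composite series

f_b, p_b = welch(buoy_sst, fs=fs, nperseg=1024)
f_s, p_s = welch(satellite_sst, fs=fs, nperseg=1024)

# Compare energy at periods shorter than one month (frequency > 1/30 cpd).
high_freq = f_b > 1.0 / 30.0
ratio = p_s[high_freq].mean() / p_b[high_freq].mean()
print(f"satellite/buoy high-frequency energy ratio: {ratio:.2f}")
```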

Machine Learning Based MMS Point Cloud Semantic Segmentation (머신러닝 기반 MMS Point Cloud 의미론적 분할)

  • Bae, Jaegu;Seo, Dongju;Kim, Jinsoo
    • Korean Journal of Remote Sensing / v.38 no.5_3 / pp.939-951 / 2022
  • The most important factor in designing autonomous driving systems is recognizing the exact location of the vehicle within the surrounding environment. To date, various sensors and navigation systems have been used for autonomous driving systems; however, all have limitations. Therefore, the need for high-definition (HD) maps that provide high-precision infrastructure information for safe and convenient autonomous driving is increasing. HD maps are drawn using three-dimensional point cloud data acquired through a mobile mapping system (MMS). However, this process requires manual work due to the large number of points and drawing layers, increasing the cost and effort associated with HD mapping. The objective of this study was to improve the efficiency of HD mapping by segmenting semantic information in an MMS point cloud into six classes: roads, curbs, sidewalks, medians, lanes, and other elements. Segmentation was performed using various machine learning techniques, including random forest (RF), support vector machine (SVM), k-nearest neighbor (KNN), and gradient-boosting machine (GBM), with 11 variables covering geometry, color, intensity, and other road design features. MMS point cloud data for a 130-m section of a five-lane road near Minam Station in Busan were used to evaluate the segmentation models; the average F1 scores of the models were 95.43% for RF, 92.1% for SVM, 91.05% for GBM, and 82.63% for KNN. The RF model showed the best segmentation performance, with F1 scores of 99.3%, 95.5%, 94.5%, 93.5%, and 90.1% for roads, sidewalks, curbs, medians, and lanes, respectively. The variable importance results of the RF model showed high mean decrease accuracy and mean decrease Gini for the XY dist. and Z dist. variables, respectively, both related to road design. Thus, variables related to road design contributed significantly to the segmentation of semantic information. The results of this study demonstrate the applicability of machine learning-based segmentation of MMS point cloud data and will help to reduce the cost and effort associated with HD mapping.
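A minimal sketch of the point-wise classification evaluated above: each point carries geometric, color, and intensity features plus a class label (road, curb, sidewalk, median, lane, other), and a Random Forest predicts the label. The file and feature names are hypothetical stand-ins for the paper's 11 variables, which include road-design features such as XY dist. and Z dist.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, classification_report

df = pd.read_parquet("mms_points.parquet")    # hypothetical per-point table
features = ["x", "y", "z", "intensity", "r", "g", "b",
            "xy_dist", "z_dist", "planarity", "verticality"]
X, y = df[features], df["label"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
rf = RandomForestClassifier(n_estimators=300, n_jobs=-1, random_state=0)
rf.fit(X_tr, y_tr)

pred = rf.predict(X_te)
print("macro F1:", f1_score(y_te, pred, average="macro"))
print(classification_report(y_te, pred))      # per-class F1, as in the paper
print(dict(zip(features, rf.feature_importances_)))  # variable importance
```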

A Method of Reproducing the CCT of Natural Light using the Minimum Spectral Power Distribution for each Light Source of LED Lighting (LED 조명의 광원별 최소 분광분포를 사용하여 자연광 색온도를 재현하는 방법)

  • Yang-Soo Kim;Seung-Taek Oh;Jae-Hyun Lim
    • Journal of Internet Computing and Services / v.24 no.2 / pp.19-26 / 2023
  • Humans have adapted and evolved under natural light. In modern times, however, people stay indoors for longer, which induces disturbances of the biological rhythm. To address this problem, research is being conducted on lighting that reproduces the correlated color temperature (CCT) of natural light as it varies from sunrise to sunset. In the conventional approach, a luminaire is built from multiple LED light sources with different CCTs; a control index DB is constructed by measuring and collecting the light characteristics of the input-current combinations for each light source over hundreds to thousands of steps, and the lighting is then controlled by matching against these characteristics. The problem with this control method is that the finer the steps of the input-current combinations, the greater the time and economic cost. This paper proposes an LED lighting control method that reproduces the CCT of natural light by applying interpolation and combination calculations to a minimal set of spectral power distribution (SPD) measurements for each light source. First, five minimum SPD measurements per channel were collected for an LED luminaire consisting of light source channels with different CCTs and supporting 256-step input current control on each channel. Interpolation generated 256-step SPDs for each channel from the minimum SPD measurements, and the SPDs for all control combinations of the LED lighting were generated by combining the per-channel SPDs. Illuminance and CCT were calculated from the generated SPDs, a control index DB was constructed, and the CCT of natural light was reproduced through a matching technique. In the performance evaluation, the method reproduced the CCT of natural light with an average error rate of 0.18% while meeting the recommended indoor illuminance standard.
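A minimal sketch of the interpolation-and-combination idea: per-channel SPDs measured at a few current steps are linearly interpolated to all 256 steps, a candidate mixture's SPD is the sum of the chosen per-channel SPDs, and the mixture's CCT is estimated from its chromaticity with McCamy's (1992) approximation. The CIE 1931 color matching functions (`cmf`, columns xbar/ybar/zbar on the same uniform wavelength grid as the SPDs) and the measured SPD arrays are assumed to be loaded elsewhere; this is not the paper's exact computation.

```python
import numpy as np

def interpolate_channel_spds(measured_steps, measured_spds, n_steps=256):
    """measured_spds: (k, n_wavelengths) SPDs measured at k current steps.
    Returns a (n_steps, n_wavelengths) array of interpolated SPDs."""
    steps = np.arange(n_steps)
    return np.array([np.interp(steps, measured_steps, measured_spds[:, j])
                     for j in range(measured_spds.shape[1])]).T

def cct_from_spd(spd, cmf):
    """CCT via McCamy's approximation from an SPD and CIE 1931 CMFs."""
    X, Y, Z = (spd[:, None] * cmf).sum(axis=0)   # tristimulus values
    x, y = X / (X + Y + Z), Y / (X + Y + Z)      # chromaticity coordinates
    n = (x - 0.3320) / (0.1858 - y)
    return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33

# Combination step: the SPD of a mixture is the sum of the selected
# per-channel SPDs, e.g. spd_mix = spd_ch1[i1] + spd_ch2[i2] + ... for
# current indices i1, i2, ...; its CCT is then cct_from_spd(spd_mix, cmf).
```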

Background effect on the measurement of trace amount of uranium by thermal ionization mass spectrometry (열이온화 질량분석에 의한 극미량 우라늄 정량에 미치는 바탕값 영향)

  • Jeon, Young-Shin;Park, Yong-Joon;Joe, Kih-Soo;Han, Sun-Ho;Song, Kyu-Seok
    • Analytical Science and Technology / v.21 no.6 / pp.487-494 / 2008
  • An experiment was performed on zone-refined and normal (non-zone-refined) Re filaments to reduce the background effect on the measurement of low-level uranium samples. On both filaments, signals that appeared to come from clusters of light alkali elements, $(^{39}K_6)^+$ and $(^{39}K_5{}^{41}K)^+$, and from $PbO_2$ were identified as isobaric interferences with the uranium isotopes. In the zone-refined Re filaments, the isobaric interference signal disappeared completely after heating the filament to about $2000^{\circ}C$ at a vacuum below $10^{-7}$ torr for more than 1.5 hours, whereas in the normal Re filaments it did not disappear completely: an impurity equivalent to 3 pg of uranium remained even after degassing for more than 5 hours under the same conditions. The threshold condition for eliminating impurities proved to be a filament current of 5 A with 30 minutes of degassing. The uranium impurity content of the rhenium filament was checked before and after degassing using a U-233 spike and isotope dilution mass spectrometry: 0.31 ng of U was detected in the rhenium filament without degassing, while only 3 pg was detected after baking at a current of 5.5 A for 1 hr. Using normal Re filaments for ultra-trace uranium analysis is therefore problematic, because 3 pg of uranium remains on the filament even after long degassing; for a 1 ng uranium measurement this alone introduces a 0.3% error. It was also confirmed that the ionization filament current should not exceed 5.5 A in order to limit the background. Finally, the uranium isotope contents of uranium standard materials (a KRISS standard material and the NIST standard materials U-005 and U-030) were measured and compared with certified values; the differences were 0.04% for U-235, 2% for U-234, and 2% for U-236.
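The 0.3% error quoted above for a 1 ng sample follows directly from the ratio of the residual filament blank to the sample amount:

$$\frac{3\ \mathrm{pg}}{1\ \mathrm{ng}} = \frac{3\times10^{-12}\ \mathrm{g}}{1\times10^{-9}\ \mathrm{g}} = 0.003 = 0.3\%$$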

Content-based Recommendation Based on Social Network for Personalized News Services (개인화된 뉴스 서비스를 위한 소셜 네트워크 기반의 콘텐츠 추천기법)

  • Hong, Myung-Duk;Oh, Kyeong-Jin;Ga, Myung-Hyun;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems / v.19 no.3 / pp.57-71 / 2013
  • Over a billion people around the world generate and consume news minute by minute. Some news can be anticipated, but most arises from unexpected events such as natural disasters, accidents, and crimes. People spend much time watching the huge amount of news delivered by many media outlets, because they want to understand what is happening now, predict what might happen in the near future, and share and discuss the news. Useful information obtained from news helps people make better daily decisions. However, it is difficult for people to choose suitable news and extract useful information from it, because there are so many news media, such as portal sites and broadcasters, and most news articles consist of gossip and breaking news. User interests also change over time, and many people have no interest in outdated news; a personalized news service therefore needs to reflect users' recent interests, which means it should dynamically manage user profiles. In this paper, a content-based news recommendation system is proposed to provide such a personalized news service. Personalization necessarily requires the user's personal information, which is extracted from a social network service. The proposed system constructs a dynamic user profile based on the user's recent information on Facebook, one of the social network services. The user information comprises personal information, recent articles, and Facebook Page information. Facebook Pages allow businesses, organizations, and brands to share their content and connect with people, and Facebook users can add a Page to indicate their interest in it. The proposed system uses this Page information to create the user profile and to match user preferences to news topics. However, some Pages do not map directly to a news topic, because a Page deals with an individual object and does not provide topic information suitable for news. Freebase, a large collaborative database of well-known people, places, and things, is used to match Pages to news topics through the hierarchy information of its objects. By using the recent Page information and articles of Facebook users, the proposed system maintains a dynamic user profile. The generated user profile is used to measure the user's preferences for news. To generate news profiles, the news categories predefined by the news media are used, and keywords are extracted after analyzing the news content, including title, category, and script. The TF-IDF technique, which reflects how important a word is to a document in a corpus, is used to identify the keywords of each news article. User profiles and news profiles share the same format so that the similarity between user preferences and news can be measured efficiently. The proposed system calculates all similarity values between user profiles and news profiles. Existing similarity calculations in the vector space model do not cover synonyms, hypernyms, and hyponyms, because they handle only the given words; the proposed system applies WordNet to the similarity calculation to overcome this limitation. The Top-N news articles with the highest similarity values for a target user are then recommended to that user.
To evaluate the proposed news recommendation system, user profiles were generated from Facebook accounts with the participants' consent, and we implemented a Web crawler to extract news information from PBS, a non-profit public broadcasting television network in the United States, and construct news profiles. We compared the performance of the proposed method with that of two benchmark algorithms: a traditional method based on TF-IDF, and the 6Sub-Vectors method, which divides the points used to obtain keywords into six parts. Experimental results demonstrate that, in terms of the prediction error of recommended news, the proposed system provides useful news to users by applying the user's social network information and WordNet.
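A minimal sketch of the core matching step described above: TF-IDF profiles for the user and each news article, cosine similarity between them, and a WordNet-based expansion so that synonyms of profile terms can also contribute. This illustrates the general technique rather than the paper's exact pipeline; the example texts are hypothetical, and the WordNet corpus must be available (e.g., via nltk.download("wordnet")).

```python
from nltk.corpus import wordnet as wn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def expand_with_synonyms(text):
    """Append WordNet synonyms of each token so related words can match."""
    tokens = text.lower().split()
    expanded = list(tokens)
    for tok in tokens:
        for syn in wn.synsets(tok):
            expanded += [l.name().replace("_", " ") for l in syn.lemmas()]
    return " ".join(expanded)

user_profile = "earthquake disaster relief rescue"       # hypothetical profile
articles = ["quake survivors rescued from rubble",       # hypothetical articles
            "stock market rallies on tech earnings"]

docs = [expand_with_synonyms(d) for d in [user_profile] + articles]
tfidf = TfidfVectorizer().fit_transform(docs)
scores = cosine_similarity(tfidf[0:1], tfidf[1:]).ravel()
ranked = sorted(zip(scores, articles), reverse=True)     # Top-N recommendation
print(ranked)
```

With the expansion, "earthquake" in the profile and "quake" in the first article can share a WordNet synonym, raising that article's similarity, which plain TF-IDF over the given words alone would miss.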