The rise in temperature induced by global warming intensifies El Niño and La Niña events and abnormally changes seawater temperatures. Rainfall becomes concentrated in certain regions because of these abnormal variations in seawater temperature, causing frequent and unusually severe floods. Rapidly detecting flooded regions is therefore important for recovery and for preventing human and property damage, and synthetic aperture radar (SAR) makes such detection possible. This study aims to build a model that directly delineates flood-damaged areas from TerraSAR-X images using a modified U-NET with multi-kernel convolutions, which reduces the effect of speckle noise by extracting diverse feature maps, and which takes a pair of images acquired before and after flooding as input. To that end, the two SAR images were preprocessed to generate the model's input data, which was then fed into the modified U-NET structure to train the flood detection deep learning model. With this method, the flooded area was detected with an average F1 score of 0.966. These results are expected to contribute to the rapid recovery of flood-stricken areas and to the derivation of flood-prevention measures.
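The abstract does not give the layer configuration; the following is a minimal sketch of how a multi-kernel convolution block for such a modified U-NET encoder might look, assuming parallel 3x3, 5x5, and 7x7 kernels whose feature maps are concatenated. The channel sizes and the two-channel before/after input are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class MultiKernelBlock(nn.Module):
    """Parallel convolutions with different kernel sizes; concatenating the
    feature maps exposes both fine and coarse context, which can help
    suppress speckle noise in SAR imagery (illustrative sketch only)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        branch_ch = out_ch // 3
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, branch_ch, kernel_size=k, padding=k // 2)
            for k in (3, 5, 7)
        ])
        self.fuse = nn.Sequential(
            nn.Conv2d(branch_ch * 3, out_ch, kernel_size=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        return self.fuse(feats)

# Two co-registered SAR scenes (before / after flooding) stacked as channels.
block = MultiKernelBlock(in_ch=2, out_ch=48)
x = torch.randn(1, 2, 256, 256)   # batch of one 256x256 image pair
print(block(x).shape)             # torch.Size([1, 48, 256, 256])
```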
Park, Seong Wan; Hwang, Jung Joon; Hwang, Kyu Young; Park, Hee Mun / KSCE Journal of Civil and Environmental Engineering Research / v.26, no.5D / pp.797-804 / 2006
This paper presents the development of regression models for asphalt concrete pavements using Falling Weight Deflectometer (FWD) deflections. A backcalculation program based on layered elastic theory was used to generate a synthetic modulus database, from which 95% confidence intervals of the modulus in each layer were derived. The FWD deflection basins used in developing this procedure were collected from the Pavement Management System for flexible pavements. The backcalculation assumes a three-layered flexible pavement structure and a finite depth to bedrock. The 95% confidence intervals were found to be consistent with the modulus ranges reported in other studies, so the confidence-interval data for each layer were used to develop multiple regression models. Multiple regression equations for each layer were established with SPSS, a statistical analysis package, and validated through regression diagnostics, including case analysis, multicollinearity analysis, influence diagnostics, and analysis of variance. The models achieved coefficients of determination above 0.75 and were applied to predict the modulus of domestic asphalt concrete pavements from FWD field tests.
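As an illustration of the modelling step only, the sketch below fits a multiple regression from FWD deflections and layer thickness to an asphalt modulus using synthetic data; the predictor names, units, and coefficients are hypothetical and not drawn from the paper's database (scikit-learn stands in for SPSS).

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Hypothetical predictors: FWD deflections (um) at increasing sensor offsets,
# plus asphalt layer thickness (cm); the response is the layer modulus (MPa).
n = 200
deflections = rng.uniform(100, 600, size=(n, 4))   # D0, D30, D60, D90
thickness = rng.uniform(10, 30, size=(n, 1))
X = np.hstack([deflections, thickness])

# Synthetic modulus for illustration only (not the paper's database).
E = 8000 - 6.0 * deflections[:, 0] + 40.0 * thickness[:, 0] + rng.normal(0, 150, n)

model = LinearRegression().fit(X, E)
print("R^2 =", round(r2_score(E, model.predict(X)), 3))
print("coefficients:", np.round(model.coef_, 2))
```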
KIPS Transactions on Software and Data Engineering / v.7, no.9 / pp.351-360 / 2018
Locally Weighted Regression (LWR) is traditionally a lazy learning model that produces a prediction for a given input variable, the query point, by fitting a regression equation over a short interval in which training samples closer to the query point receive higher weights. We study an incremental ensemble learning approach for LWR, a form of lazy, memory-based learning. The proposed method sequentially generates LWR models over time and integrates them with a genetic algorithm to obtain a solution at a specific query point. A weakness of existing LWR approaches is that multiple LWR models can be generated depending on the indicator function and the selected data samples, and prediction quality varies with the chosen model; yet little research has addressed how to select or combine multiple LWR models. In this study, after generating an initial LWR model from an indicator function and a sample data set, we iterate an evolutionary learning process to obtain a suitable indicator function and evaluate the resulting LWR models on other sample data sets to overcome data set bias. We also adopt an eager learning strategy to gradually generate and store LWR models as data are produced for all sections. To obtain a prediction at a specific point in time, an LWR model is built from newly generated data within a predetermined interval and then combined, via the genetic algorithm, with the existing LWR models for that section. The proposed method outperforms selecting among multiple LWR models with a simple averaging method. The results are also compared with predictions from multiple regression analysis using real data, such as hourly traffic volume in a specific area and hourly sales at a highway rest area.
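For reference, the base learner described here can be sketched as a locally weighted linear regression that predicts at a single query point using Gaussian kernel weights. The bandwidth, toy data, and closed-form solve are illustrative assumptions; the genetic-algorithm ensemble itself is not shown.

```python
import numpy as np

def lwr_predict(x_query, X, y, tau=0.5):
    """Locally weighted (linear) regression at a single query point.

    Samples near x_query get higher weight via a Gaussian kernel of
    bandwidth tau; a weighted least-squares line is then solved in
    closed form. Illustrative sketch of the base learner only."""
    X_aug = np.column_stack([np.ones_like(X), X])       # add intercept
    q_aug = np.array([1.0, x_query])
    w = np.exp(-(X - x_query) ** 2 / (2 * tau ** 2))    # kernel weights
    W = np.diag(w)
    theta = np.linalg.solve(X_aug.T @ W @ X_aug, X_aug.T @ W @ y)
    return q_aug @ theta

# Toy data: an hourly traffic-like signal with noise.
rng = np.random.default_rng(1)
X = np.linspace(0, 24, 120)
y = 100 + 40 * np.sin(X / 24 * 2 * np.pi) + rng.normal(0, 5, X.size)
print(round(lwr_predict(14.0, X, y), 1))   # prediction at hour 14
```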
Data is the most important asset in the financial sector: on average, 71 percent of financial institutions generate competitive advantage through data analysis. In the card industry in particular, card transaction data is widely used to develop merchant information, track economic fluctuations, and provide information services by analyzing consumer behavior patterns and preference trends across all customers. However, the creation of new value through the fusion of data remains insufficient. This study introduces a credit card company's analysis and forecasting of consumption trends that convergently analyzed social data together with the company's own sales data. BC Card developed an algorithm for linking card and social data through trend profiling, along with a visualization system for the analysis results. To verify its performance, BC Card analyzed trends related to the 'Six Pocket' phenomenon and conducted a pilot marketing campaign, increasing the marketing multiplier by 40-100%. This study offers a methodology and case for converging structured and unstructured data analyses that have previously been performed separately, and it provides useful implications for future trend analysis not only in the card industry but also in other industries.
KIPS Transactions on Software and Data Engineering / v.9, no.1 / pp.25-32 / 2020
In context-aware systems, rule-based AI technology has traditionally been used in the abstraction process that derives context information. However, the rules have grown more complicated as user requirements for services diversify and data usage increases, so there are technical limitations to maintaining rule-based models and processing unstructured data. To overcome these limitations, many studies have applied machine learning techniques to context-aware systems. Using such machine learning models in a context-aware system requires a management process that periodically injects training data. Previous studies on machine learning-based context-aware systems considered management processes such as generating and supplying training data for operating several machine learning models, but the methods were limited to the systems they were applied to. In this paper, we propose a training data generation method that extends machine learning-based context-aware systems. The proposed method defines a training data generation model that reflects the requirements of each machine learning model and generates training data accordingly. In the experiment, the training data generation model is defined from the training data generation schema of a cardiac status analysis model for the elderly in a health status notification service, and training data is generated by applying the defined model in a real software environment. We then compare the accuracy obtained by training the machine learning model on the generated data to verify the validity of the generated training data.
Recent advancements in data measurement technology have facilitated the installation of various sensors, such as pressure meters and flow meters, to effectively assess the real-time conditions of water distribution systems (WDSs). However, as cities expand, the factors that affect measurement reliability have become increasingly diverse. In particular, demand data, one of the most significant hydraulic variables in a WDS, is difficult to measure directly and is prone to missing values, making accurate data generation models increasingly important. Therefore, this paper proposes an adversarially trained autoencoder (ATAE) model based on generative deep learning techniques to accurately estimate demand data in WDSs. The proposed model consists of two neural networks: a generative network and a discriminative network. The generative network generates demand data from the information provided by the measured pressure data, while the discriminative network evaluates the generated demand outputs and provides feedback to the generator so that it learns the distinctive features of the data. To validate its performance, the ATAE model is applied to a real distribution system in Austin, Texas, USA. The study analyzes the impact of data uncertainty by calculating the accuracy of the ATAE's predictions for varying levels of uncertainty in the demand and pressure time series. The model's performance is also evaluated by comparing results across different data collection periods (low, average, and high demand hours) to assess its ability to generate demand data according to water consumption levels.
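A minimal sketch of the adversarial training idea, assuming the generator maps a vector of node pressures to a vector of node demands and the discriminator scores demand vectors as measured or generated; the layer sizes, loss weighting, and toy data are illustrative and do not reproduce the authors' ATAE.

```python
import torch
import torch.nn as nn

N_PRESSURE, N_DEMAND = 16, 16   # hypothetical numbers of monitored nodes

generator = nn.Sequential(
    nn.Linear(N_PRESSURE, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),          # bottleneck (encoder-decoder style)
    nn.Linear(32, N_DEMAND),
)
discriminator = nn.Sequential(
    nn.Linear(N_DEMAND, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),
)

bce, mse = nn.BCELoss(), nn.MSELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

def train_step(pressure, demand_true):
    # Discriminator: distinguish measured demand from generated demand.
    demand_fake = generator(pressure).detach()
    d_loss = bce(discriminator(demand_true), torch.ones(len(demand_true), 1)) + \
             bce(discriminator(demand_fake), torch.zeros(len(demand_fake), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: reconstruct demand while fooling the discriminator.
    demand_gen = generator(pressure)
    g_loss = mse(demand_gen, demand_true) + \
             0.1 * bce(discriminator(demand_gen), torch.ones(len(demand_gen), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# One toy step with random data standing in for real WDS measurements.
p = torch.randn(8, N_PRESSURE)
d = torch.randn(8, N_DEMAND)
print(train_step(p, d))
```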
Kim, Sang-Gu; Han, Eum; Lee, Hwan-Pil; Kim, Hae; Yun, Ilsoo / International Journal of Highway Engineering / v.18, no.5 / pp.117-125 / 2016
PURPOSES: Density in uninterrupted traffic flow facilities plays an important role in representing the current status of traffic flow; for example, it is the primary measure of effectiveness in capacity analysis for freeway facilities. Estimating density has therefore long been a difficult task for traffic engineers. This study was initiated to evaluate the performance of density values estimated from VDS data with two traditional methods, one based on traffic flow theory and the other on occupancy, by comparing them with density values estimated from vehicular trajectory data generated by a radar detector. METHODS: A radar detector capable of generating very accurate vehicular trajectories within a range of 250 m was installed on the Joongbu expressway near the Dongseoul tollgate, where two VDS were already in place. The first task was to estimate densities using the different data sources and methods: density values were estimated with the two traditional methods from the VDS data and compared with those estimated from the vehicular trajectory data to evaluate the quality of density estimation. The relationships between space mean speed and density were then drawn using the two sets of densities and speeds based on the VDS data and one set based on the radar detector data. CONCLUSIONS: The three sets of density showed only minor differences when density was under 20 vehicles per km per lane. However, as density exceeded 20 vehicles per km per lane, the three methods differed significantly from one another, with the density based on the vehicular trajectory data generally the lowest. Further analysis showed that the space mean speed plays a critical role in calculating density: the speed estimated from the VDS data was higher than that from the radar detector. To validate this difference, the traffic flow models relating space mean speed and density were carefully examined, and the traffic flow models generated from the radar data appear to be more realistic.
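For reference, the two traditional density estimates can be written down directly; the effective vehicle length and the example numbers below are illustrative assumptions, not values from the study.

```python
def density_from_flow(flow_vph, space_mean_speed_kph):
    """Traffic flow theory: k = q / v_s, in vehicles per km per lane."""
    return flow_vph / space_mean_speed_kph

def density_from_occupancy(occupancy_pct, eff_length_m=6.5):
    """Occupancy-based estimate: the fraction of time the detector is
    occupied, divided by the effective vehicle length (vehicle length
    plus detection zone length), scaled to vehicles per km per lane.
    The 6.5 m effective length is an illustrative assumption."""
    return (occupancy_pct / 100.0) * 1000.0 / eff_length_m

# Example: 1,800 veh/h/lane at a space mean speed of 90 km/h,
# or 12% detector occupancy with a 6.5 m effective length.
print(density_from_flow(1800, 90))           # 20.0 veh/km/lane
print(round(density_from_occupancy(12), 1))  # ~18.5 veh/km/lane
```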
Nawaz, Javeria; Arshad, Muhammad Zeeshan; Park, Jin-Su; Shin, Sung-Won; Hong, Sang-Jeen / Proceedings of the Korean Vacuum Society Conference / 2012.02a / pp.239-240 / 2012
With advancements in semiconductor device technologies, manufacturing processes are becoming more complex and it has become more difficult to maintain tight process control. As the number of processing steps increases for fabricating complex chip structures, potential fault-inducing factors are prevalent and their allowable margins are continuously reduced. Therefore, one of the keys to success in semiconductor manufacturing is highly accurate and fast fault detection and classification at each stage, to reduce any undesired variation and identify the cause of the fault. Sensors in the equipment are used to monitor the state of the process; the idea is that whenever there is a fault in the process, it appears as some variation in the output of one of the sensors monitoring the process. These sensors may report pressure, RF power, gas flow, and other equipment signals. By relating the data from these sensors to the process condition, any abnormality in the process can be identified, although only with some degree of uncertainty. Our hypothesis in this research is that the features of equipment condition data can be captured from a library of healthy processes: the healthy data serve as a reference for upcoming processes, which is made possible by mathematically modeling the acquired data. In this work we use a recurrent neural network (RNN), a dynamic neural network whose output is a function of previous inputs. Our data set comes from an etch equipment tool set and consists of 22 parameters over 9 runs. The data were first synchronized using the Dynamic Time Warping (DTW) algorithm. The synchronized sensor time series were then provided to the RNN, which trains and restructures itself according to the input and predicts a value one step ahead in time that depends on the past values of the data. Eight runs of process data were used to train the network, while one run was used as a test input to check the network's performance. A mean-squared-error-based probability generating function was then used to assign a probability of fault to each parameter by comparing the predicted and actual values of the data. In future work we will use Bayesian Networks to classify the detected faults. Bayesian Networks use directed acyclic graphs that relate different parameters through their conditional dependencies in order to draw inferences among them. The relationships between parameters in the data will be used to generate the structure of the Bayesian Network, and the posterior probabilities of different faults will then be calculated using inference algorithms.
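A minimal sketch of the one-step-ahead prediction and MSE-based fault scoring described here, assuming a plain RNN over the 22 DTW-synchronized parameters; the network size, training loop, and score normalization are illustrative, not the authors' exact model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

N_PARAMS, HIDDEN = 22, 32

class OneStepRNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.RNN(N_PARAMS, HIDDEN, batch_first=True)
        self.head = nn.Linear(HIDDEN, N_PARAMS)

    def forward(self, x):              # x: (batch, time, params)
        out, _ = self.rnn(x)
        return self.head(out)          # prediction for the next time step

model = OneStepRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy training data standing in for the 8 healthy, DTW-synchronized runs.
healthy = torch.randn(8, 100, N_PARAMS)
for _ in range(50):
    pred = model(healthy[:, :-1])                  # predict t+1 from t
    loss = F.mse_loss(pred, healthy[:, 1:])
    opt.zero_grad(); loss.backward(); opt.step()

# Test run: per-parameter squared error, normalized to a fault "probability".
test = torch.randn(1, 100, N_PARAMS)
with torch.no_grad():
    err = (model(test[:, :-1]) - test[:, 1:]).pow(2).mean(dim=(0, 1))
fault_prob = err / err.sum()
print(fault_prob.shape)    # torch.Size([22]), one score per parameter
```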
The Journal of The Korea Institute of Intelligent Transport Systems / v.19, no.5 / pp.70-81 / 2020
Point-by-point data, abundantly collected by vehicles with embedded GPS (Global Positioning System), generate useful information. These data support decisions by transportation jurisdictions, while private vendors can monitor and investigate micro-scale driver behavior, traffic flow, and roadway movements, and apply the information to develop app-based route guidance and business models. Among these data, speed plays a vital role in developing key parameters and in applying agent-based information and services. Nevertheless, link speed values require different levels of physical storage and fidelity depending on both the collection and the reporting intervals. Given these circumstances, this study aimed to establish an appropriate collection interval for efficiently utilizing Space Mean Speed information from vehicles with embedded GPS. We compared probe-vehicle data with image-based vehicle data to assess the percentage error (PE). The results showed that the PE of the probe-vehicle data stayed within the 95% confidence level at an 8-second interval, which was therefore chosen as the appropriate collection interval for probe-vehicle data. We hope that the resulting guidelines will help C-ITS and autonomous driving service providers use more reliable Space Mean Speed data to develop better C-ITS and autonomous driving services.
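For reference, space mean speed over a link and the percentage error against a reference measurement can be computed as sketched below; the link length, travel times, and reference speed are illustrative numbers, not data from the study.

```python
import numpy as np

def space_mean_speed(link_length_m, travel_times_s):
    """Space mean speed over a link: link length divided by the average
    travel time of the probe vehicles (equivalent to the harmonic mean
    of individual speeds), returned in km/h."""
    return (link_length_m / np.mean(travel_times_s)) * 3.6

def percentage_error(estimate, reference):
    return abs(estimate - reference) / reference * 100.0

# Illustrative numbers only: five probe vehicles traversing a 500 m link,
# compared against an image-based reference speed of 47.0 km/h.
probe_times = np.array([36.0, 40.5, 38.2, 42.0, 39.1])
sms = space_mean_speed(500.0, probe_times)
print(round(sms, 1), "km/h,", round(percentage_error(sms, 47.0), 1), "% error")
```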
Journal of the Korean Society of Marine Environment & Safety / v.21, no.5 / pp.599-607 / 2015
The purpose of this study is to reproduce long-term wave fields in the coastal waters of Korea through wave hindcasting modelling and to discuss its applications. To validate the wind data (NCEP, ECMWF, JMA-MSM), the wind data sets were compared with wave buoy data. JMA-MSM predicted the wind with the highest accuracy; however, because the ECMWF wind data cover a relatively longer period than the JMA-MSM data, the ECMWF wind data set (2001-2014) was used for the wave hindcasting modelling. The numerical modelling results were verified against observed data from wave buoys installed in offshore waters by the Korea Meteorological Administration (KMA) and the Korea Hydrographic and Oceanographic Agency (KHOA). The results agree well with the buoy observations, especially during event periods such as typhoons. The wave data reproduced by the hindcasting modelling were therefore used to fill missing data in the wave observation buoy records. The reconstructed data indicated an underestimation of maximum wave height during event periods at some buoy locations, which may be attributable to the coarser time interval and resolution of the input wind data, as well as to water depth and grid size. The methodology used in the present study can be applied to analyzing coastal erosion data in conjunction with the wave characteristics of event periods in coastal areas, and it can also be used in coastal disaster vulnerability assessments to generate wave points of interest.