• Title/Summary/Keyword: input estimation

Search Results: 1,826

Estimation of Carbon Emission and LCA (Life Cycle Assessment) from Soybean (Glycine max L.) Production System (콩의 생산과정에서 발생하는 탄소배출량 산정 및 전과정평가)

  • So, Kyu-Ho;Lee, Gil-Zae;Kim, Gun-Yeob;Jeong, Hyun-Cheol;Ryu, Jong-Hee;Park, Jung-Ah;Lee, Deog-Bae
    • Korean Journal of Soil Science and Fertilizer / v.43 no.6 / pp.898-903 / 2010
  • This study was carried out to estimate carbon emissions using LCA (Life Cycle Assessment) and to establish an LCI (Life Cycle Inventory) database for the soybean production system. From the data collected for the LCI, the input of organic fertilizer for soybean cultivation was 3.10E+00 kg $kg^{-1}$ soybean and that of mineral fertilizer was 4.57E-01 kg $kg^{-1}$ soybean, the largest inputs to soybean production. Direct field emission during soybean cropping was 1.48E-01 kg $kg^{-1}$ soybean. The LCI analysis focused on greenhouse gases (GHG) showed that the carbon footprint was 3.36E+00 kg $CO_2$-eq $kg^{-1}$ soybean. $CO_2$ accounted for 71% of the GHG emission, while $CH_4$ and $N_2O$ accounted for an estimated 18% and 11%, respectively. These emissions came mainly from fertilizer production (92%) and soybean cultivation (7%) within the soybean production system. $N_2O$ emitted during soybean cropping accounted for 67% of the emission. In $CO_2$-eq values, $CO_2$ and $N_2O$ were 2.36E+00 kg $CO_2$-eq $kg^{-1}$ soybean and 3.50E-01 kg $CO_2$-eq $kg^{-1}$ soybean, respectively. The LCIA (Life Cycle Impact Assessment) of the soybean production system showed that the fertilizer production process contributed approximately 90% of GWP (global warming potential). The characterization value of GWP was 3.36E+00 kg $CO_2$-eq $kg^{-1}$.
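As a rough illustration of how such a carbon footprint is characterized, the sketch below sums gas emissions weighted by 100-year global warming potentials. The GWP factors are the commonly used IPCC values and the per-gas masses are hypothetical placeholders, not figures from the paper.

```python
# Minimal sketch: carbon footprint = sum of gas emissions x GWP characterization factor.
# GWP100 factors (CO2 = 1, CH4 = 25, N2O = 298) are the widely used IPCC values;
# the per-kg-soybean gas masses below are hypothetical.
GWP = {"CO2": 1.0, "CH4": 25.0, "N2O": 298.0}
emissions_kg_per_kg_soybean = {"CO2": 2.36e+00, "CH4": 2.4e-02, "N2O": 1.2e-03}

carbon_footprint = sum(GWP[gas] * mass for gas, mass in emissions_kg_per_kg_soybean.items())
print(f"{carbon_footprint:.2f} kg CO2-eq per kg soybean")
```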

A fundamental study on the automation of tunnel blasting design using a machine learning model (머신러닝을 이용한 터널발파설계 자동화를 위한 기초연구)

  • Kim, Yangkyun;Lee, Je-Kyum;Lee, Sean Seungwon
    • Journal of Korean Tunnelling and Underground Space Association / v.24 no.5 / pp.431-449 / 2022
  • As many tunnels have been constructed, a large body of experience and technique has accumulated for tunnel design as well as tunnel construction. For routine tunnel design work it is therefore often sufficient to modify or supplement previous similar design cases, unless the tunnel has a unique structure or unusual geological conditions. For tunnel blast design in particular, referring to previous similar cases is reasonable because the blast design produced at the design stage is a preliminary one: additional blast design is generally performed through test blasts before tunnel excavation begins. Meanwhile, in the Industry 4.0 era, artificial intelligence (AI), whose use is surging across the whole industrial sector, is being widely applied to tunnelling and blasting. For drill-and-blast tunnels, AI has mainly been applied to the estimation of blast vibration and to rock mass classification; there are few cases in which it has been applied to blast pattern design. This study therefore attempts to automate tunnel blast design by means of machine learning, a branch of AI. Blast design data were collected from 25 tunnel design reports for training and 2 additional reports for testing. From these, 4 design parameters (rock mass class, road type, and the cross-sectional areas of the upper and bench sections) were used as input data, and 16 design elements (blast cut type, specific charge, number of drill holes, and spacing and burden for each blast hole group, etc.) as output. Three machine learning models (XGBoost, ANN, and SVM) were tested on this design data; XGBoost was chosen as the best model, and its predictions show a generally similar trend to an actual design when assumed design parameters are input. The results are not yet sufficient to carry out a whole blast design, but additional studies are planned to make practical use possible after collecting more blast design data and refining the machine learning process.
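A minimal sketch of the setup the abstract describes (4 design parameters in, 16 design elements out, modeled with XGBoost) might look like the following; the column names, example values, and hyperparameters are assumptions for illustration, not the paper's data.

```python
# Multi-output regression sketch: 4 design parameters -> 16 blast design elements.
import pandas as pd
from xgboost import XGBRegressor
from sklearn.multioutput import MultiOutputRegressor

# Inputs: rock mass class, road type, cross-sectional areas of the upper and bench sections.
X_train = pd.DataFrame({
    "rock_mass_class": [1, 3, 5],
    "road_type": [0, 1, 1],                  # e.g., encoded road classification
    "upper_section_area": [48.5, 52.0, 60.3],
    "bench_section_area": [30.1, 33.5, 41.0],
})
# Outputs: design elements such as specific charge, number of holes, spacing/burden
# per blast hole group (only a few of the 16 shown here).
y_train = pd.DataFrame({
    "specific_charge": [0.9, 1.1, 1.3],
    "n_drill_holes": [110, 125, 150],
    "stopping_spacing": [1.2, 1.1, 1.0],
    "stopping_burden": [1.0, 0.9, 0.9],
})

# XGBoost fits one target at a time, so wrap it for the multi-dimensional output.
model = MultiOutputRegressor(XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.1))
model.fit(X_train, y_train)
pred = model.predict(X_train)                # rows of predicted design elements
```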

Forward Osmotic Pressure-Free (△𝜋≤0) Reverse Osmosis and Osmotic Pressure Approximation of Concentrated NaCl Solutions (정삼투-무삼투압차(△𝜋≤0) 법 역삼투 해수 담수화 및 고농도 NaCl 용액의 삼투압 근사식)

  • Chang, Ho Nam;Choi, Kyung-Rok;Jung, Kwonsu;Park, Gwon Woo;Kim, Yeu-Chun;Suh, Charles;Kim, Nakjong;Kim, Do Hyun;Kim, Beom Su;Kim, Han Min;Chang, Yoon-Seok;Kim, Nam Uk;Kim, In Ho;Kim, Kunwoo;Lee, Habit;Qiang, Fei
    • Membrane Journal / v.32 no.4 / pp.235-252 / 2022
  • Forward osmotic pressure-free reverse osmosis (Δ𝜋 = 0 RO) was invented in 2013, and the first patent (US 9,950,297 B2) was registered on April 18, 2018. The "Osmotic Pressure of Concentrated Solutions" data published in JACS (1908) by G.N. Lewis of MIT was used for the osmotic pressure estimation. Chang's RO system differs from conventional RO (C-RO) in that it is a two-chamber system consisting of an osmotic pressure equalizer and a low-pressure RO unit, whereas C-RO is based on a single chamber. Chang claimed that any aqueous solution, including salt water, can be separated into water and salt regardless of its osmotic pressure. The second patent (US 10,953,367 B2, March 23, 2021) showed that low-pressure reverse osmosis is possible for a 3.0% input at a Δ𝜋 of 10 to 12 bar. Singularity ZERO reverse osmosis, from the third patent (Korea patent 10-22322755, US-PCT/KR202003595), achieves, for a 3.0% NaCl input, 50% more water recovery with 1/3 of the RO membrane area and 1/5 of the theoretical energy. These numbers come from Chang's laboratory experiments and theoretical analysis. The relative residence time (RRT) of the feed and OE chambers drives Δ𝜋 to zero or negative by recycling the enriched feed flow. The construction cost of S-ZERO was estimated to be around 50~60% of that of the current RO system.
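For context, a back-of-the-envelope estimate of NaCl osmotic pressure can be made with the textbook van't Hoff relation π = i·φ·c·R·T. This is not the paper's own approximation for concentrated solutions; the constant osmotic coefficient assumed below is exactly what breaks down at high concentration, which is what the paper's correlation addresses.

```python
# Van't Hoff estimate of osmotic pressure for an NaCl solution (illustrative only).
R = 0.083145      # L bar / (mol K)
T = 298.15        # K
i = 2             # NaCl dissociates into Na+ and Cl-
phi = 0.93        # assumed osmotic coefficient for seawater-strength NaCl

def osmotic_pressure_bar(mass_percent_nacl, density_kg_per_L=1.02):
    """Approximate osmotic pressure (bar) of an NaCl solution of the given wt%."""
    molar_mass = 58.44                             # g/mol NaCl
    grams_per_L = mass_percent_nacl / 100 * density_kg_per_L * 1000
    c = grams_per_L / molar_mass                   # mol/L
    return i * phi * c * R * T

print(f"~3.0% NaCl: {osmotic_pressure_bar(3.0):.0f} bar")   # roughly 24 bar
```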

Estimation of Economic Impact on the Air Transport Industry based on the Volcanic Ash Dispersion Scenario of Mt. Baekdu (백두산 화산재 확산 시나리오에 따른 항공산업의 경제적 피해 예측)

  • Kim, Su-Do;Lee, Yeonjeong;Yoon, Seong-Min
    • Journal of International Area Studies (JIAS) / v.18 no.3 / pp.109-144 / 2014
  • In 2010, large areas of European airspace were closed by volcanic ash from the eruption of an Icelandic volcano, which disrupted global trade, business, and travel and caused huge economic damage to the air transport industry. This raised concern about the economic impact of a possible eruption of the Mt. Baekdu volcano. In this paper, the affected areas of the air transport industry are determined by calculating the PM10 density of volcanic ash as it changes over time and by applying the safe upper limit of ash density in the airspace. We separate the sales of the air transport industry by airline, airport, and month to estimate the direct losses when all flights inside a restricted zone are canceled. We also estimate the indirect losses in regional output, income, and value added of the major industries using interindustry (input-output) analysis. There is no direct damage from VEI 1 to VEI 5. When VEI is 6, however, all flights to and from Yangyang airport are canceled due to the No Fly Zone, and some flights to and from Gimhae, Ulsan, and Pohang airports are restricted due to the Time Limited Zone. When VEI is 7, Yangyang, Gimhae, Ulsan, Pohang, and Daegu airports are closed and all flights are canceled or delayed. In this case the total economic losses to the air transport industry are estimated at 8.1 billion won (direct losses of about 3.55 billion won and indirect losses of about 4.57 billion won). Gimhae international airport accounts for 92% of the total loss and is the most affected area under the volcanic ash scenario of Mt. Baekdu.
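The indirect-loss step rests on standard input-output (Leontief) analysis: with technical coefficient matrix A and a change in final demand Δf, the induced change in total output is Δx = (I - A)⁻¹ Δf. The toy three-sector matrix below is invented purely for illustration; the study uses the Korean interindustry tables.

```python
# Toy Leontief input-output calculation of induced (indirect) output change.
import numpy as np

A = np.array([            # technical coefficients (inputs per unit of output), invented
    [0.10, 0.05, 0.02],   # air transport
    [0.08, 0.20, 0.10],   # petroleum / fuel
    [0.05, 0.10, 0.15],   # other services
])
delta_f = np.array([-3.55e9, 0.0, 0.0])   # direct sales loss in the air transport sector (won)

leontief_inverse = np.linalg.inv(np.eye(3) - A)
delta_x = leontief_inverse @ delta_f      # total (direct + indirect) output change by sector
indirect = delta_x.sum() - delta_f.sum()  # output loss induced beyond the direct loss
print("output change by sector:", delta_x)
print("induced loss beyond the direct loss:", indirect)
```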

Detection of Wildfire Burned Areas in California Using Deep Learning and Landsat 8 Images (딥러닝과 Landsat 8 영상을 이용한 캘리포니아 산불 피해지 탐지)

  • Youngmin Seo;Youjeong Youn;Seoyeon Kim;Jonggu Kang;Yemin Jeong;Soyeon Choi;Yungyo Im;Yangwon Lee
    • Korean Journal of Remote Sensing / v.39 no.6_1 / pp.1413-1425 / 2023
  • The increasing frequency of wildfires due to climate change is causing extreme loss of life and property. Wildfires remove vegetation and, depending on their intensity and extent, alter ecosystems; ecosystem changes in turn affect wildfire occurrence, causing secondary damage. Accurate estimation of the areas affected by wildfires is therefore fundamental. Satellite remote sensing is used for forest fire detection because it can rapidly acquire topographic and meteorological information about the affected area after a fire. In addition, deep learning algorithms such as convolutional neural networks (CNN) and transformer models show high performance for more accurate monitoring of fire-burnt regions. To date, however, the application of deep learning models has been limited, and there is a scarcity of reports providing quantitative performance evaluations for practical field utilization. Hence, this study emphasizes a comparative analysis, exploring performance enhancements achieved through both model selection and data design. We examined deep learning models for detecting wildfire-damaged areas using Landsat 8 satellite images of California and conducted a comprehensive comparison of the detection performance of multiple models, such as U-Net and High-Resolution Network-Object Contextual Representation (HRNet-OCR). Wildfire-related spectral indices such as the normalized difference vegetation index (NDVI) and the normalized burn ratio (NBR) were used as input channels for the deep learning models to reflect the degree of vegetation cover and surface moisture content. As a result, the mean intersection over union (mIoU) was 0.831 for U-Net and 0.848 for HRNet-OCR, showing high segmentation performance. Including the spectral indices alongside the base wavelength bands increased the metric values for all combinations, confirming that augmenting the input data with spectral indices contributes to more refined pixel-level classification. This study can be applied to other satellite images to help build recovery strategies for fire-burnt areas.
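The spectral indices used as extra input channels follow the standard definitions NDVI = (NIR - Red)/(NIR + Red) and NBR = (NIR - SWIR2)/(NIR + SWIR2). A sketch of computing and stacking them from Landsat 8 bands is shown below; the random arrays stand in for real reflectance data, and this is not the paper's exact preprocessing pipeline.

```python
# Sketch of the NDVI and NBR input channels from Landsat 8 OLI bands
# (B4 = red, B5 = NIR, B7 = SWIR2). The arrays below are placeholders.
import numpy as np

red, nir, swir2 = (np.random.rand(512, 512).astype("float32") for _ in range(3))

eps = 1e-6                                   # guard against division by zero
ndvi = (nir - red) / (nir + red + eps)       # vegetation cover
nbr = (nir - swir2) / (nir + swir2 + eps)    # burn severity / surface moisture proxy

# Stack reflectance bands and indices as channels for a segmentation model such as U-Net.
x = np.stack([red, nir, swir2, ndvi, nbr], axis=0)   # (channels, height, width)
```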

Analysis of Nutrient Cycling Structure of a Korean Beef Cattle Farm Combined with Cropping as Affected by Bedding Material Types (깔개물질의 종류에 따른 한우-경종 결합 농가의 양분순환 구조 분석)

  • Lim, Sang-Sun;Kwak, Jin-Hyeob;Park, Hyun-Jung;Lee, Sun-Il;Lee, Dong-Suk;Kim, Yong-Soon;Yun, Bong-Ki;Kim, Sun-Woo;Choi, Woo-Jung
    • Korean Journal of Soil Science and Fertilizer / v.41 no.5 / pp.354-361 / 2008
  • In this study, we analyzed the nutrient cycling structure of a small farm (100 head of cattle and 2.5 ha of arable land) in Jeonnam province to investigate the effects of nutrient input by the addition of bedding materials (sawdust and rice hull) and of nutrient loss before application to the soil (during manure storage in the feedlot and the composting process) on the nutrient cycling structure. Sawdust and rice hull added as bedding materials increased N by 1.6% and 14.2% and $P_2O_5$ by 3.1% and 27.4%, respectively, relative to the amount of nutrients produced by excretion. This result suggests that the addition of nutrients via bedding materials should be considered for a better estimation of the nutrient balance. The most significant characteristic of the nutrient cycling structure was the loss of mass and nutrients during the storage (21 days) and composting (90 days) periods. During this period, 78.4% of N and 9.5% of $P_2O_5$ were lost from the sawdust compost, while the corresponding losses from the rice hull compost were 81.6% and 10.3%, respectively. The lower percentage of nutrient loss in the sawdust compost compared with the rice hull compost was attributed to the relatively slow decomposition of organic materials in the sawdust compost, which has a higher C/N ratio and lignin content. We therefore conclude that nutrient balance estimation should be based on the nutrient content of the final compost being applied to the land rather than on the amount of nutrients contained in the livestock excretion, and that the effects of bedding materials on nutrient losses should also be taken into account.
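As a simplified worked example of the balance the abstract argues for, the sketch below applies the reported bedding additions and composting losses to a hypothetical amount of excreted N. Treating the loss percentage as applying to excretion plus bedding is an interpretation, and the starting amount is invented.

```python
# Simplified N balance: excretion + bedding addition, then loss during storage/composting.
excreted_n = 1000.0                      # kg N from livestock excretion (hypothetical)

treatments = {
    #             (bedding N addition, N loss during storage + composting)
    "sawdust":    (0.016, 0.784),
    "rice_hull":  (0.142, 0.816),
}

for name, (added, lost) in treatments.items():
    n_before_loss = excreted_n * (1 + added)
    n_in_final_compost = n_before_loss * (1 - lost)
    print(f"{name:9s}: {n_in_final_compost:.0f} kg N reaches the field "
          f"out of {excreted_n:.0f} kg excreted")
```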

Evaluation of MODIS-derived Evapotranspiration at the Flux Tower Sites in East Asia (동아시아 지역의 플럭스 타워 관측지에 대한 MODIS 위성영상 기반의 증발산 평가)

  • Jeong, Seung-Taek;Jang, Keun-Chang;Kang, Sin-Kyu;Kim, Joon;Kondo, Hiroaki;Gamo, Minoru;Asanuma, Jun;Saigusa, Nobuko;Wang, Shaoqiang;Han, Shijie
    • Korean Journal of Agricultural and Forest Meteorology / v.11 no.4 / pp.174-184 / 2009
  • Evapotranspiration (ET) is one of the major hydrologic processes in terrestrial ecosystems. A reliable estimation of spatially representative ET is necessary for deriving regional water budgets, the primary productivity of vegetation, and land surface feedbacks to the regional climate. The Moderate Resolution Imaging Spectroradiometer (MODIS) provides an opportunity to monitor ET over wide areas at a daily time scale. In this study, we applied a MODIS-based ET algorithm and tested its reliability at nine flux tower sites in East Asia. It is a stand-alone MODIS algorithm based on the Penman-Monteith equation that uses input data derived from MODIS. Instantaneous ET was estimated and scaled up to daily ET. For six flux sites, the MODIS-derived instantaneous ET showed good agreement with the measured data ($r^2$ = 0.38 to 0.73, ME = -44 to +31 W m$^{-2}$, RMSE = 48 to 111 W m$^{-2}$); for the other three sites, agreement was poor. The predictability of MODIS ET improved when the up-scaled daily ET was used ($r^2$ = 0.48 to 0.89, ME = -0.7 to -0.6 mm day$^{-1}$, RMSE = 0.5 to 1.1 mm day$^{-1}$). Errors in the canopy conductance were identified as a primary source of uncertainty in MODIS-derived ET; hence, a more reliable estimation of canopy conductance is necessary to increase the accuracy of MODIS ET.
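For reference, the standard Penman-Monteith form of the latent heat flux on which such algorithms are based is shown below; the study's contribution lies in supplying the radiation, vapour pressure deficit, and resistance terms from MODIS products alone, which is not detailed here.

```latex
\lambda E \;=\; \frac{\Delta\,(R_n - G) \;+\; \rho_a c_p\,(e_s - e_a)/r_a}
                     {\Delta \;+\; \gamma\,\bigl(1 + r_s/r_a\bigr)}
```

Here $\Delta$ is the slope of the saturation vapour pressure curve, $R_n$ net radiation, $G$ soil heat flux, $\rho_a$ air density, $c_p$ the specific heat of air, $e_s - e_a$ the vapour pressure deficit, $r_a$ and $r_s$ the aerodynamic and surface (canopy) resistances, and $\gamma$ the psychrometric constant; the abstract's point about canopy conductance errors corresponds to uncertainty in $r_s$.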

Development of a deep neural network model to estimate solar radiation using temperature and precipitation (온도와 강수를 이용하여 일별 일사량을 추정하기 위한 심층 신경망 모델 개발)

  • Kang, DaeGyoon;Hyun, Shinwoo;Kim, Kwang Soo
    • Korean Journal of Agricultural and Forest Meteorology / v.21 no.2 / pp.85-96 / 2019
  • Solar radiation is an important variable for the estimation of energy balance and the water cycle in natural and agricultural ecosystems. A deep neural network (DNN) model was developed to estimate daily global solar radiation. Temperature and precipitation, which are more widely available from weather stations than variables such as sunshine duration, were used as inputs to the DNN model. Five-fold cross-validation was applied to train and test the DNN models. Meteorological data at 15 weather stations in Korea were collected for a long period (> 30 years). The DNN model obtained from the cross-validation had a relatively small RMSE (3.75 MJ m$^{-2}$ d$^{-1}$) for estimates of daily solar radiation at the Suwon weather station and explained about 68% of the variation in observed solar radiation there. It was found that the measured solar radiation in 1985 and 1998 was considerably low for short periods of time compared with sunshine duration, which suggests that the quality of the solar radiation observations should be assessed in further studies. When data for those years were excluded from the analysis, the DNN model had slightly better agreement statistics; for example, $R^2$ and RMSE were 0.72 and 3.55 MJ m$^{-2}$ d$^{-1}$, respectively. Our results indicate that a DNN would be useful for developing a solar radiation estimation model based on temperature and precipitation, which are usually available in downscaled scenario data for future climate conditions. Such a DNN model would thus be useful for assessing the impact of climate change on crop production, where solar radiation is a required input variable to a crop model.
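A minimal sketch of this workflow (a feed-forward DNN mapping daily temperature and precipitation features to solar radiation, evaluated with 5-fold cross-validation) is given below; the layer sizes, feature set, and training loop are illustrative assumptions, and the random data stands in for the station records.

```python
# Feed-forward DNN for daily solar radiation with 5-fold cross-validation (sketch).
import numpy as np
import torch
import torch.nn as nn
from sklearn.model_selection import KFold

# Hypothetical daily inputs: Tmax, Tmin, diurnal range, precipitation.
X = np.random.rand(3000, 4).astype("float32")
y = (np.random.rand(3000, 1).astype("float32") * 30)   # MJ m^-2 d^-1

def make_model():
    return nn.Sequential(
        nn.Linear(4, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, 1),
    )

for fold, (tr, te) in enumerate(KFold(n_splits=5, shuffle=True).split(X)):
    model, loss_fn = make_model(), nn.MSELoss()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    xb, yb = torch.from_numpy(X[tr]), torch.from_numpy(y[tr])
    for _ in range(200):                      # simple full-batch training loop
        opt.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        opt.step()
    with torch.no_grad():
        pred = model(torch.from_numpy(X[te]))
        rmse = torch.sqrt(loss_fn(pred, torch.from_numpy(y[te]))).item()
    print(f"fold {fold}: RMSE = {rmse:.2f} MJ m^-2 d^-1")
```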

Rainfall image DB construction for rainfall intensity estimation from CCTV videos: focusing on experimental data in a climatic environment chamber (CCTV 영상 기반 강우강도 산정을 위한 실환경 실험 자료 중심 적정 강우 이미지 DB 구축 방법론 개발)

  • Byun, Jongyun;Jun, Changhyun;Kim, Hyeon-Joon;Lee, Jae Joon;Park, Hunil;Lee, Jinwook
    • Journal of Korea Water Resources Association / v.56 no.6 / pp.403-417 / 2023
  • In this research, a methodology was developed for constructing an appropriate rainfall image database for estimating rainfall intensity from CCTV video. The database was constructed in the Large-Scale Climate Environment Chamber of the Korea Conformity Laboratories, which can control variables that show high irregularity and variability in real environments. 1,728 scenarios were designed under five different experimental conditions, from which 36 scenarios and a total of 97,200 frames were selected. Rain streaks were extracted with the k-nearest neighbor algorithm by calculating the difference between each image and the background. To prevent overfitting, only data with pixel values greater than a set threshold, relative to the average pixel value of each image, were selected. The area with the maximum pixel variability was found by shifting a window every 10 pixels and was set as the representative area (180×180) of the original image. After resizing to 120×120 as input data for a convolutional neural network model, image augmentation was performed under unified shooting conditions. 92% of the data fell within an absolute PBIAS range of 10%. The final results of this study clearly have the potential to enhance the accuracy and efficacy of existing real-world CCTV systems through transfer learning.
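One plausible reading of these preprocessing steps is sketched below with OpenCV: a KNN background subtractor to isolate rain streaks, a 10-pixel-step sliding window to find the most variable 180×180 region, and a resize to 120×120 for the CNN. The file name, the use of variance as the variability measure, and the subtractor settings are assumptions, not the paper's exact procedure.

```python
# Rain streak extraction and representative-patch selection from a CCTV frame (sketch).
import cv2

subtractor = cv2.createBackgroundSubtractorKNN(detectShadows=False)

def rain_streak_patch(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    streaks = subtractor.apply(gray)                     # foreground = candidate rain streaks
    best, best_var = None, -1.0
    h, w = streaks.shape
    for y in range(0, h - 180 + 1, 10):                  # slide the window every 10 pixels
        for x in range(0, w - 180 + 1, 10):
            window = streaks[y:y + 180, x:x + 180]
            if window.var() > best_var:
                best, best_var = window, window.var()
    return cv2.resize(best, (120, 120))                  # CNN input size

cap = cv2.VideoCapture("chamber_scenario_01.mp4")        # hypothetical file name
ok, frame = cap.read()
if ok:
    patch = rain_streak_patch(frame)
```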

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.163-177 / 2019
  • As smartphones have become widely used, human activity recognition (HAR) tasks for recognizing the personal activities of smartphone users from multimodal data have been actively studied. The research area is expanding from the recognition of simple body movements of an individual user to the recognition of low-level and high-level behavior. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data. In contrast, physical sensors such as the accelerometer, magnetic field, and gyroscope sensors are less vulnerable to privacy issues and can collect a large amount of data within a short time. In this paper, a method for detecting accompanying status with a deep learning model using only multimodal physical sensor data (accelerometer, magnetic field, and gyroscope) is proposed. The accompanying status is defined as a subset of user interaction behavior: whether the user is accompanied by an acquaintance at a close distance and whether the user is actively communicating with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompanying and conversation is proposed. First, a data preprocessing method is introduced that consists of time synchronization of the multimodal data from the different physical sensors, data normalization, and sequence data generation. Nearest interpolation is applied to synchronize the timestamps of the data collected from the different sensors, normalization is performed on each x, y, z axis value of the sensor data, and sequence data are generated with a sliding window. The sequence data then become the input to the CNN, which extracts feature maps representing local dependencies of the original sequence. The CNN consists of 3 convolutional layers and has no pooling layer, so as to maintain the temporal information of the sequence data. Next, the LSTM recurrent networks receive the feature maps, learn long-term dependencies from them, and extract features. The LSTM recurrent networks consist of two layers, each with 128 cells. Finally, the extracted features are used for classification by a softmax classifier. The loss function of the model is the cross-entropy function, and the weights of the model are randomly initialized from a normal distribution with a mean of 0 and a standard deviation of 0.1. The model is trained with the adaptive moment estimation (Adam) optimization algorithm with a mini-batch size of 128. Dropout is applied to the input values of the LSTM recurrent networks to prevent overfitting. The initial learning rate is set to 0.001 and decreases exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data from a total of 18 subjects. Using these data, the model classified accompanying and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and the accuracy of the model were higher than those of the majority vote classifier, a support vector machine, and a deep recurrent neural network.
In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences. In addition, we will further study transfer learning methods that enable models trained on the training data to be transferred to evaluation data that follows a different distribution. We expect to obtain a model with robust recognition performance against changes in the data that are not considered during model training.
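A minimal PyTorch sketch of the described architecture follows. The three convolutional layers without pooling, the two-layer LSTM with 128 cells, dropout on the LSTM inputs, cross-entropy loss, Adam with a 0.001 learning rate and 0.99 per-epoch decay, and the batch size of 128 come from the abstract; the channel counts, kernel sizes, window length, and dropout rate are assumptions.

```python
# CNN-LSTM classifier for accompanying / conversation status (illustrative sketch).
import torch
import torch.nn as nn

class CNNLSTMClassifier(nn.Module):
    def __init__(self, n_channels=9, n_classes=2):
        super().__init__()
        # Three 1-D convolutional layers, no pooling, so the temporal length is preserved.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.dropout = nn.Dropout(0.5)            # dropout applied to the LSTM inputs
        self.lstm = nn.LSTM(input_size=64, hidden_size=128,
                            num_layers=2, batch_first=True)
        self.fc = nn.Linear(128, n_classes)        # softmax is applied inside CrossEntropyLoss

    def forward(self, x):                          # x: (batch, channels, time)
        feats = self.cnn(x)                        # (batch, 64, time)
        feats = self.dropout(feats.transpose(1, 2))  # (batch, time, 64)
        out, _ = self.lstm(feats)
        return self.fc(out[:, -1, :])              # last time step -> class logits

model = CNNLSTMClassifier()
criterion = nn.CrossEntropyLoss()                  # cross-entropy loss as in the abstract
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.99)

# One illustrative training step on random data (batch size 128 as in the abstract).
x = torch.randn(128, 9, 128)                       # accel + magnetometer + gyro, x/y/z each
y = torch.randint(0, 2, (128,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
scheduler.step()                                   # 0.99 decay at the end of each epoch
```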